00:00:00.002 Started by upstream project "autotest-nightly" build number 3914 00:00:00.002 originally caused by: 00:00:00.002 Started by user Latecki, Karol 00:00:00.003 Started by upstream project "autotest-nightly" build number 3912 00:00:00.003 originally caused by: 00:00:00.003 Started by user Latecki, Karol 00:00:00.004 Started by upstream project "autotest-nightly" build number 3911 00:00:00.004 originally caused by: 00:00:00.004 Started by user Latecki, Karol 00:00:00.004 Started by upstream project "autotest-nightly" build number 3909 00:00:00.004 originally caused by: 00:00:00.005 Started by user Latecki, Karol 00:00:00.005 Started by upstream project "autotest-nightly" build number 3908 00:00:00.005 originally caused by: 00:00:00.005 Started by user Latecki, Karol 00:00:00.052 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.053 The recommended git tool is: git 00:00:00.054 using credential 00000000-0000-0000-0000-000000000002 00:00:00.056 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.085 Fetching changes from the remote Git repository 00:00:00.089 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.135 Using shallow fetch with depth 1 00:00:00.135 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.135 > git --version # timeout=10 00:00:00.199 > git --version # 'git version 2.39.2' 00:00:00.199 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.254 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.254 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/6 # timeout=5 00:00:04.417 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.429 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.442 Checking out Revision e33ef006ccd688d2b66122cd0240b989d53c9017 (FETCH_HEAD) 00:00:04.442 > git config core.sparsecheckout # timeout=10 00:00:04.454 > git read-tree -mu HEAD # timeout=10 00:00:04.472 > git checkout -f e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=5 00:00:04.497 Commit message: "jenkins/jjb: remove nvme tests from distro specific jobs." 00:00:04.497 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10 00:00:04.601 [Pipeline] Start of Pipeline 00:00:04.611 [Pipeline] library 00:00:04.613 Loading library shm_lib@master 00:00:04.613 Library shm_lib@master is cached. Copying from home. 00:00:04.627 [Pipeline] node 00:00:19.628 Still waiting to schedule task 00:00:19.629 Waiting for next available executor on ‘vagrant-vm-host’ 00:02:58.017 Running on VM-host-SM17 in /var/jenkins/workspace/freebsd-vg-autotest_2 00:02:58.019 [Pipeline] { 00:02:58.030 [Pipeline] catchError 00:02:58.032 [Pipeline] { 00:02:58.046 [Pipeline] wrap 00:02:58.058 [Pipeline] { 00:02:58.066 [Pipeline] stage 00:02:58.067 [Pipeline] { (Prologue) 00:02:58.087 [Pipeline] echo 00:02:58.090 Node: VM-host-SM17 00:02:58.097 [Pipeline] cleanWs 00:02:58.106 [WS-CLEANUP] Deleting project workspace... 00:02:58.106 [WS-CLEANUP] Deferred wipeout is used... 
00:02:58.112 [WS-CLEANUP] done 00:02:58.311 [Pipeline] setCustomBuildProperty 00:02:58.413 [Pipeline] httpRequest 00:02:58.438 [Pipeline] echo 00:02:58.440 Sorcerer 10.211.164.101 is alive 00:02:58.449 [Pipeline] httpRequest 00:02:58.454 HttpMethod: GET 00:02:58.454 URL: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:02:58.455 Sending request to url: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:02:58.455 Response Code: HTTP/1.1 200 OK 00:02:58.456 Success: Status code 200 is in the accepted range: 200,404 00:02:58.456 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest_2/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:02:58.602 [Pipeline] sh 00:02:58.880 + tar --no-same-owner -xf jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:02:58.897 [Pipeline] httpRequest 00:02:58.913 [Pipeline] echo 00:02:58.915 Sorcerer 10.211.164.101 is alive 00:02:58.922 [Pipeline] httpRequest 00:02:58.926 HttpMethod: GET 00:02:58.926 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:02:58.927 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:02:58.930 Response Code: HTTP/1.1 200 OK 00:02:58.930 Success: Status code 200 is in the accepted range: 200,404 00:02:58.931 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest_2/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:03:02.226 [Pipeline] sh 00:03:02.511 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:03:05.810 [Pipeline] sh 00:03:06.144 + git -C spdk log --oneline -n5 00:03:06.144 f7b31b2b9 log: declare g_deprecation_epoch static 00:03:06.144 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static 00:03:06.144 3731556bd lvol: declare g_lvol_if static 00:03:06.144 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static 00:03:06.144 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static 00:03:06.164 [Pipeline] writeFile 00:03:06.177 [Pipeline] sh 00:03:06.454 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:06.466 [Pipeline] sh 00:03:06.745 + cat autorun-spdk.conf 00:03:06.745 SPDK_TEST_UNITTEST=1 00:03:06.745 SPDK_RUN_VALGRIND=0 00:03:06.745 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:06.745 SPDK_TEST_NVME=1 00:03:06.745 SPDK_TEST_BLOCKDEV=1 00:03:06.745 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:06.751 RUN_NIGHTLY=1 00:03:06.755 [Pipeline] } 00:03:06.772 [Pipeline] // stage 00:03:06.790 [Pipeline] stage 00:03:06.793 [Pipeline] { (Run VM) 00:03:06.811 [Pipeline] sh 00:03:07.091 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:07.091 + echo 'Start stage prepare_nvme.sh' 00:03:07.091 Start stage prepare_nvme.sh 00:03:07.091 + [[ -n 7 ]] 00:03:07.091 + disk_prefix=ex7 00:03:07.091 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest_2 ]] 00:03:07.091 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf ]] 00:03:07.091 + source /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf 00:03:07.091 ++ SPDK_TEST_UNITTEST=1 00:03:07.091 ++ SPDK_RUN_VALGRIND=0 00:03:07.091 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:07.091 ++ SPDK_TEST_NVME=1 00:03:07.091 ++ SPDK_TEST_BLOCKDEV=1 00:03:07.091 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:07.091 ++ RUN_NIGHTLY=1 00:03:07.091 + cd /var/jenkins/workspace/freebsd-vg-autotest_2 00:03:07.091 + nvme_files=() 00:03:07.091 + declare -A nvme_files 00:03:07.091 
+ backend_dir=/var/lib/libvirt/images/backends 00:03:07.091 + nvme_files['nvme.img']=5G 00:03:07.091 + nvme_files['nvme-cmb.img']=5G 00:03:07.091 + nvme_files['nvme-multi0.img']=4G 00:03:07.091 + nvme_files['nvme-multi1.img']=4G 00:03:07.091 + nvme_files['nvme-multi2.img']=4G 00:03:07.091 + nvme_files['nvme-openstack.img']=8G 00:03:07.091 + nvme_files['nvme-zns.img']=5G 00:03:07.091 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:07.091 + (( SPDK_TEST_FTL == 1 )) 00:03:07.091 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:07.091 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:03:07.091 + for nvme in "${!nvme_files[@]}" 00:03:07.092 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:03:07.092 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:07.092 + for nvme in "${!nvme_files[@]}" 00:03:07.092 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:03:07.092 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:07.092 + for nvme in "${!nvme_files[@]}" 00:03:07.092 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:03:07.092 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:07.092 + for nvme in "${!nvme_files[@]}" 00:03:07.092 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:03:07.092 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:07.092 + for nvme in "${!nvme_files[@]}" 00:03:07.092 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:03:07.092 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:07.092 + for nvme in "${!nvme_files[@]}" 00:03:07.092 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:03:07.092 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:07.092 + for nvme in "${!nvme_files[@]}" 00:03:07.092 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:03:07.092 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:07.092 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:03:07.092 + echo 'End stage prepare_nvme.sh' 00:03:07.092 End stage prepare_nvme.sh 00:03:07.103 [Pipeline] sh 00:03:07.381 + DISTRO=freebsd14 CPUS=10 RAM=14336 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:07.381 Setup: -n 10 -s 14336 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -H -a -v -f freebsd14 00:03:07.381 00:03:07.381 DIR=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk/scripts/vagrant 00:03:07.381 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk 00:03:07.381 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest_2 00:03:07.381 HELP=0 00:03:07.381 DRY_RUN=0 00:03:07.381 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img, 00:03:07.381 NVME_DISKS_TYPE=nvme, 00:03:07.381 
NVME_AUTO_CREATE=0 00:03:07.381 NVME_DISKS_NAMESPACES=, 00:03:07.381 NVME_CMB=, 00:03:07.381 NVME_PMR=, 00:03:07.381 NVME_ZNS=, 00:03:07.381 NVME_MS=, 00:03:07.381 NVME_FDP=, 00:03:07.381 SPDK_VAGRANT_DISTRO=freebsd14 00:03:07.381 SPDK_VAGRANT_VMCPU=10 00:03:07.381 SPDK_VAGRANT_VMRAM=14336 00:03:07.381 SPDK_VAGRANT_PROVIDER=libvirt 00:03:07.381 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:07.381 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:07.381 SPDK_OPENSTACK_NETWORK=0 00:03:07.381 VAGRANT_PACKAGE_BOX=0 00:03:07.381 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:03:07.381 FORCE_DISTRO=true 00:03:07.381 VAGRANT_BOX_VERSION= 00:03:07.381 EXTRA_VAGRANTFILES= 00:03:07.381 NIC_MODEL=e1000 00:03:07.381 00:03:07.381 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt' 00:03:07.381 /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt /var/jenkins/workspace/freebsd-vg-autotest_2 00:03:10.663 Bringing machine 'default' up with 'libvirt' provider... 00:03:11.229 ==> default: Creating image (snapshot of base box volume). 00:03:11.230 ==> default: Creating domain with the following settings... 00:03:11.230 ==> default: -- Name: freebsd14-14.0-RELEASE-1718332871-2294_default_1721715383_a7fb00d9d8ea4397567f 00:03:11.230 ==> default: -- Domain type: kvm 00:03:11.230 ==> default: -- Cpus: 10 00:03:11.230 ==> default: -- Feature: acpi 00:03:11.230 ==> default: -- Feature: apic 00:03:11.230 ==> default: -- Feature: pae 00:03:11.230 ==> default: -- Memory: 14336M 00:03:11.230 ==> default: -- Memory Backing: hugepages: 00:03:11.230 ==> default: -- Management MAC: 00:03:11.230 ==> default: -- Loader: 00:03:11.230 ==> default: -- Nvram: 00:03:11.230 ==> default: -- Base box: spdk/freebsd14 00:03:11.230 ==> default: -- Storage pool: default 00:03:11.230 ==> default: -- Image: /var/lib/libvirt/images/freebsd14-14.0-RELEASE-1718332871-2294_default_1721715383_a7fb00d9d8ea4397567f.img (32G) 00:03:11.230 ==> default: -- Volume Cache: default 00:03:11.230 ==> default: -- Kernel: 00:03:11.230 ==> default: -- Initrd: 00:03:11.230 ==> default: -- Graphics Type: vnc 00:03:11.230 ==> default: -- Graphics Port: -1 00:03:11.230 ==> default: -- Graphics IP: 127.0.0.1 00:03:11.230 ==> default: -- Graphics Password: Not defined 00:03:11.230 ==> default: -- Video Type: cirrus 00:03:11.230 ==> default: -- Video VRAM: 9216 00:03:11.230 ==> default: -- Sound Type: 00:03:11.230 ==> default: -- Keymap: en-us 00:03:11.230 ==> default: -- TPM Path: 00:03:11.230 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:11.230 ==> default: -- Command line args: 00:03:11.230 ==> default: -> value=-device, 00:03:11.230 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:11.230 ==> default: -> value=-drive, 00:03:11.230 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:03:11.230 ==> default: -> value=-device, 00:03:11.230 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:11.488 ==> default: Creating shared folders metadata... 00:03:11.488 ==> default: Starting domain. 00:03:12.864 ==> default: Waiting for domain to get an IP address... 00:03:34.868 ==> default: Waiting for SSH to become available... 00:03:47.076 ==> default: Configuring and enabling network interfaces... 
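[Editor's note] The libvirt domain above exposes the raw backing file to the guest as an emulated NVMe namespace via the qemu command line args vagrant lists. A minimal standalone sketch of the same attachment (image path, size, serial and block sizes are taken from this run; the bare qemu-system-x86_64 invocation is an illustration only, with boot disk, display and the rest of the guest configuration omitted):

    # create the 5G raw backing file, then attach it to a guest as NVMe namespace 1
    qemu-img create -f raw /var/lib/libvirt/images/backends/ex7-nvme.img 5G
    qemu-system-x86_64 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

Inside the guest this is the nvme0 controller (vendor 0x1b36) that scripts/setup.sh status reports later in this log.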
00:03:51.259 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:03.524 ==> default: Mounting SSHFS shared folder... 00:04:06.052 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt/output => /home/vagrant/spdk_repo/output 00:04:06.052 ==> default: Checking Mount.. 00:04:06.986 ==> default: Folder Successfully Mounted! 00:04:06.986 ==> default: Running provisioner: file... 00:04:08.362 default: ~/.gitconfig => .gitconfig 00:04:08.928 00:04:08.928 SUCCESS! 00:04:08.928 00:04:08.928 cd to /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt and type "vagrant ssh" to use. 00:04:08.928 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:08.928 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt" to destroy all trace of vm. 00:04:08.928 00:04:08.938 [Pipeline] } 00:04:08.954 [Pipeline] // stage 00:04:08.962 [Pipeline] dir 00:04:08.962 Running in /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt 00:04:08.964 [Pipeline] { 00:04:08.979 [Pipeline] catchError 00:04:08.981 [Pipeline] { 00:04:08.996 [Pipeline] sh 00:04:09.275 + vagrant ssh-config --host vagrant 00:04:09.275 + sed -ne /^Host/,$p 00:04:09.275 + tee ssh_conf 00:04:13.501 Host vagrant 00:04:13.501 HostName 192.168.121.225 00:04:13.501 User vagrant 00:04:13.501 Port 22 00:04:13.501 UserKnownHostsFile /dev/null 00:04:13.501 StrictHostKeyChecking no 00:04:13.501 PasswordAuthentication no 00:04:13.501 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd14/14.0-RELEASE-1718332871-2294/libvirt/freebsd14 00:04:13.501 IdentitiesOnly yes 00:04:13.501 LogLevel FATAL 00:04:13.501 ForwardAgent yes 00:04:13.501 ForwardX11 yes 00:04:13.501 00:04:13.515 [Pipeline] withEnv 00:04:13.518 [Pipeline] { 00:04:13.534 [Pipeline] sh 00:04:13.858 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:13.858 source /etc/os-release 00:04:13.858 [[ -e /image.version ]] && img=$(< /image.version) 00:04:13.858 # Minimal, systemd-like check. 00:04:13.858 if [[ -e /.dockerenv ]]; then 00:04:13.858 # Clear garbage from the node's name: 00:04:13.858 # agt-er_autotest_547-896 -> autotest_547-896 00:04:13.858 # $HOSTNAME is the actual container id 00:04:13.858 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:13.858 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:13.858 # We can assume this is a mount from a host where container is running, 00:04:13.858 # so fetch its hostname to easily identify the target swarm worker. 
00:04:13.858 container="$(< /etc/hostname) ($agent)" 00:04:13.858 else 00:04:13.858 # Fallback 00:04:13.858 container=$agent 00:04:13.858 fi 00:04:13.858 fi 00:04:13.858 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:13.858 00:04:13.869 [Pipeline] } 00:04:13.887 [Pipeline] // withEnv 00:04:13.894 [Pipeline] setCustomBuildProperty 00:04:13.911 [Pipeline] stage 00:04:13.913 [Pipeline] { (Tests) 00:04:13.926 [Pipeline] sh 00:04:14.201 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:14.213 [Pipeline] sh 00:04:14.494 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:14.508 [Pipeline] timeout 00:04:14.508 Timeout set to expire in 1 hr 30 min 00:04:14.510 [Pipeline] { 00:04:14.524 [Pipeline] sh 00:04:14.800 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:15.366 HEAD is now at f7b31b2b9 log: declare g_deprecation_epoch static 00:04:15.379 [Pipeline] sh 00:04:15.665 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:15.680 [Pipeline] sh 00:04:15.960 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:15.977 [Pipeline] sh 00:04:16.257 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang JOB_BASE_NAME=freebsd-vg-autotest ./autoruner.sh spdk_repo 00:04:16.257 ++ readlink -f spdk_repo 00:04:16.257 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:16.257 + [[ -n /home/vagrant/spdk_repo ]] 00:04:16.257 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:16.258 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:16.258 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:16.258 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:16.258 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:16.258 + [[ freebsd-vg-autotest == pkgdep-* ]] 00:04:16.258 + cd /home/vagrant/spdk_repo 00:04:16.258 + source /etc/os-release 00:04:16.258 ++ NAME=FreeBSD 00:04:16.258 ++ VERSION=14.0-RELEASE 00:04:16.258 ++ VERSION_ID=14.0 00:04:16.258 ++ ID=freebsd 00:04:16.258 ++ ANSI_COLOR='0;31' 00:04:16.258 ++ PRETTY_NAME='FreeBSD 14.0-RELEASE' 00:04:16.258 ++ CPE_NAME=cpe:/o:freebsd:freebsd:14.0 00:04:16.258 ++ HOME_URL=https://FreeBSD.org/ 00:04:16.258 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/ 00:04:16.258 + uname -a 00:04:16.258 FreeBSD freebsd-cloud-1718332871-2294.local 14.0-RELEASE FreeBSD 14.0-RELEASE #0 releng/14.0-n265380-f9716eee8ab4: Fri Nov 10 05:57:23 UTC 2023 root@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64 00:04:16.258 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:16.516 Contigmem (not present) 00:04:16.516 Buffer Size: not set 00:04:16.516 Num Buffers: not set 00:04:16.516 00:04:16.516 00:04:16.516 Type BDF Vendor Device Driver 00:04:16.516 NVMe 0:16:0 0x1b36 0x0010 nvme0 00:04:16.516 + rm -f /tmp/spdk-ld-path 00:04:16.516 + source autorun-spdk.conf 00:04:16.516 ++ SPDK_TEST_UNITTEST=1 00:04:16.516 ++ SPDK_RUN_VALGRIND=0 00:04:16.516 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:16.516 ++ SPDK_TEST_NVME=1 00:04:16.516 ++ SPDK_TEST_BLOCKDEV=1 00:04:16.516 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:16.516 ++ RUN_NIGHTLY=1 00:04:16.516 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:16.516 + [[ -n '' ]] 00:04:16.516 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:16.516 + for M in /var/spdk/build-*-manifest.txt 00:04:16.516 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:16.516 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:16.516 + for M in /var/spdk/build-*-manifest.txt 00:04:16.516 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:16.516 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:16.516 ++ uname 00:04:16.516 + [[ FreeBSD == \L\i\n\u\x ]] 00:04:16.516 + dmesg_pid=1233 00:04:16.516 + [[ FreeBSD == FreeBSD ]] 00:04:16.516 + export LC_ALL=C LC_CTYPE=C 00:04:16.516 + LC_ALL=C 00:04:16.516 + LC_CTYPE=C 00:04:16.516 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:16.516 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:16.516 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:16.516 + tail -F /var/log/messages 00:04:16.516 + [[ -x /usr/src/fio-static/fio ]] 00:04:16.516 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:16.516 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:04:16.516 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:16.516 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:04:16.516 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:16.516 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:16.516 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:16.516 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:16.516 Test configuration: 00:04:16.516 SPDK_TEST_UNITTEST=1 00:04:16.516 SPDK_RUN_VALGRIND=0 00:04:16.516 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:16.516 SPDK_TEST_NVME=1 00:04:16.516 SPDK_TEST_BLOCKDEV=1 00:04:16.516 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:16.516 RUN_NIGHTLY=1 06:17:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:16.516 06:17:29 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:16.516 06:17:29 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:16.516 06:17:29 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:16.516 06:17:29 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:04:16.516 06:17:29 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:04:16.516 06:17:29 -- paths/export.sh@4 -- $ export PATH 00:04:16.516 06:17:29 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:04:16.516 06:17:29 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:16.516 06:17:29 -- common/autobuild_common.sh@447 -- $ date +%s 00:04:16.516 06:17:29 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721715449.XXXXXX 00:04:16.516 06:17:29 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721715449.XXXXXX.71AFQChvcZ 00:04:16.516 06:17:29 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:04:16.516 06:17:29 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:04:16.516 06:17:29 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:16.517 06:17:29 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:16.517 06:17:29 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:16.517 06:17:29 -- common/autobuild_common.sh@463 -- $ get_config_params 00:04:16.517 06:17:29 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:04:16.517 06:17:29 -- common/autotest_common.sh@10 -- $ set +x 00:04:16.775 06:17:29 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:04:16.775 06:17:29 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:04:16.775 06:17:29 -- pm/common@17 -- $ local monitor 00:04:16.775 06:17:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.775 06:17:29 -- pm/common@25 -- $ sleep 1 00:04:16.775 06:17:29 -- 
pm/common@21 -- $ date +%s 00:04:16.775 06:17:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721715449 00:04:16.775 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721715449_collect-vmstat.pm.log 00:04:17.708 06:17:30 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:04:17.708 06:17:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:17.708 06:17:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:17.708 06:17:30 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:17.708 06:17:30 -- spdk/autobuild.sh@16 -- $ date -u 00:04:17.708 Tue Jul 23 06:17:30 UTC 2024 00:04:17.708 06:17:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:17.708 v24.09-pre-297-gf7b31b2b9 00:04:17.708 06:17:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:17.708 06:17:30 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:04:17.708 06:17:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:17.708 06:17:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:17.708 06:17:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:17.708 06:17:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:17.708 06:17:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:17.708 06:17:30 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:04:17.708 06:17:30 -- spdk/autobuild.sh@58 -- $ unittest_build 00:04:17.709 06:17:30 -- common/autobuild_common.sh@423 -- $ run_test unittest_build _unittest_build 00:04:17.709 06:17:30 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:04:17.709 06:17:30 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:17.709 06:17:30 -- common/autotest_common.sh@10 -- $ set +x 00:04:17.709 ************************************ 00:04:17.709 START TEST unittest_build 00:04:17.709 ************************************ 00:04:17.709 06:17:30 unittest_build -- common/autotest_common.sh@1123 -- $ _unittest_build 00:04:17.709 06:17:30 unittest_build -- common/autobuild_common.sh@414 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared 00:04:18.642 Notice: Vhost, rte_vhost library, virtio, and fuse 00:04:18.642 are only supported on Linux. Turning off default feature. 00:04:18.642 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:18.642 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:19.576 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:04:19.576 Using 'verbs' RDMA provider 00:04:29.809 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:39.787 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:39.787 Creating mk/config.mk...done. 00:04:39.787 Creating mk/cc.flags.mk...done. 00:04:39.787 Type 'gmake' to build. 00:04:39.787 06:17:52 unittest_build -- common/autobuild_common.sh@415 -- $ gmake -j10 00:04:39.787 gmake[1]: Nothing to be done for 'all'. 
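[Editor's note] The build step above is just SPDK's configure script followed by gmake. A minimal sketch of the equivalent manual steps, using exactly the options autobuild passed in this run (assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk and fio sources at /usr/src/fio):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --without-shared
    gmake -j10    # FreeBSD ships GNU make as gmake; -j10 matches the 10 vCPUs given to the VM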
00:04:43.998 ps: stdin: not a terminal 00:04:48.181 The Meson build system 00:04:48.181 Version: 1.4.0 00:04:48.181 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:48.181 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:48.181 Build type: native build 00:04:48.181 Program cat found: YES (/bin/cat) 00:04:48.181 Project name: DPDK 00:04:48.181 Project version: 24.03.0 00:04:48.181 C compiler for the host machine: /usr/bin/clang (clang 16.0.6 "FreeBSD clang version 16.0.6 (https://github.com/llvm/llvm-project.git llvmorg-16.0.6-0-g7cbf1a259152)") 00:04:48.181 C linker for the host machine: /usr/bin/clang ld.lld 16.0.6 00:04:48.181 Host machine cpu family: x86_64 00:04:48.181 Host machine cpu: x86_64 00:04:48.181 Message: ## Building in Developer Mode ## 00:04:48.181 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:04:48.181 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:48.181 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:48.181 Program python3 found: YES (/usr/local/bin/python3.9) 00:04:48.181 Program cat found: YES (/bin/cat) 00:04:48.181 Compiler for C supports arguments -march=native: YES 00:04:48.181 Checking for size of "void *" : 8 00:04:48.181 Checking for size of "void *" : 8 (cached) 00:04:48.181 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:48.181 Library m found: YES 00:04:48.181 Library numa found: NO 00:04:48.181 Library fdt found: NO 00:04:48.181 Library execinfo found: YES 00:04:48.181 Has header "execinfo.h" : YES 00:04:48.181 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.2.0 00:04:48.181 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:48.181 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:48.181 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:48.181 Run-time dependency openssl found: YES 3.0.13 00:04:48.181 Run-time dependency libpcap found: NO (tried pkgconfig) 00:04:48.181 Library pcap found: YES 00:04:48.181 Has header "pcap.h" with dependency -lpcap: YES 00:04:48.181 Compiler for C supports arguments -Wcast-qual: YES 00:04:48.181 Compiler for C supports arguments -Wdeprecated: YES 00:04:48.181 Compiler for C supports arguments -Wformat: YES 00:04:48.181 Compiler for C supports arguments -Wformat-nonliteral: YES 00:04:48.181 Compiler for C supports arguments -Wformat-security: YES 00:04:48.181 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:48.181 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:48.181 Compiler for C supports arguments -Wnested-externs: YES 00:04:48.181 Compiler for C supports arguments -Wold-style-definition: YES 00:04:48.181 Compiler for C supports arguments -Wpointer-arith: YES 00:04:48.181 Compiler for C supports arguments -Wsign-compare: YES 00:04:48.181 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:48.181 Compiler for C supports arguments -Wundef: YES 00:04:48.181 Compiler for C supports arguments -Wwrite-strings: YES 00:04:48.181 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:48.181 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:04:48.181 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:48.181 Compiler for C supports arguments -mavx512f: YES 00:04:48.181 Checking if "AVX512 checking" compiles: YES 00:04:48.181 Fetching value of define "__SSE4_2__" : 1 00:04:48.181 Fetching value of 
define "__AES__" : 1 00:04:48.181 Fetching value of define "__AVX__" : 1 00:04:48.181 Fetching value of define "__AVX2__" : 1 00:04:48.181 Fetching value of define "__AVX512BW__" : (undefined) 00:04:48.182 Fetching value of define "__AVX512CD__" : (undefined) 00:04:48.182 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:48.182 Fetching value of define "__AVX512F__" : (undefined) 00:04:48.182 Fetching value of define "__AVX512VL__" : (undefined) 00:04:48.182 Fetching value of define "__PCLMUL__" : 1 00:04:48.182 Fetching value of define "__RDRND__" : 1 00:04:48.182 Fetching value of define "__RDSEED__" : 1 00:04:48.182 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:48.182 Fetching value of define "__znver1__" : (undefined) 00:04:48.182 Fetching value of define "__znver2__" : (undefined) 00:04:48.182 Fetching value of define "__znver3__" : (undefined) 00:04:48.182 Fetching value of define "__znver4__" : (undefined) 00:04:48.182 Compiler for C supports arguments -Wno-format-truncation: NO 00:04:48.182 Message: lib/log: Defining dependency "log" 00:04:48.182 Message: lib/kvargs: Defining dependency "kvargs" 00:04:48.182 Message: lib/telemetry: Defining dependency "telemetry" 00:04:48.182 Checking if "Detect argument count for CPU_OR" compiles: YES 00:04:48.182 Checking for function "getentropy" : YES 00:04:48.182 Message: lib/eal: Defining dependency "eal" 00:04:48.182 Message: lib/ring: Defining dependency "ring" 00:04:48.182 Message: lib/rcu: Defining dependency "rcu" 00:04:48.182 Message: lib/mempool: Defining dependency "mempool" 00:04:48.182 Message: lib/mbuf: Defining dependency "mbuf" 00:04:48.182 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:48.182 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:48.182 Compiler for C supports arguments -mpclmul: YES 00:04:48.182 Compiler for C supports arguments -maes: YES 00:04:48.182 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:48.182 Compiler for C supports arguments -mavx512bw: YES 00:04:48.182 Compiler for C supports arguments -mavx512dq: YES 00:04:48.182 Compiler for C supports arguments -mavx512vl: YES 00:04:48.182 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:48.182 Compiler for C supports arguments -mavx2: YES 00:04:48.182 Compiler for C supports arguments -mavx: YES 00:04:48.182 Message: lib/net: Defining dependency "net" 00:04:48.182 Message: lib/meter: Defining dependency "meter" 00:04:48.182 Message: lib/ethdev: Defining dependency "ethdev" 00:04:48.182 Message: lib/pci: Defining dependency "pci" 00:04:48.182 Message: lib/cmdline: Defining dependency "cmdline" 00:04:48.182 Message: lib/hash: Defining dependency "hash" 00:04:48.182 Message: lib/timer: Defining dependency "timer" 00:04:48.182 Message: lib/compressdev: Defining dependency "compressdev" 00:04:48.182 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:48.182 Message: lib/dmadev: Defining dependency "dmadev" 00:04:48.182 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:48.182 Message: lib/reorder: Defining dependency "reorder" 00:04:48.182 Message: lib/security: Defining dependency "security" 00:04:48.182 Has header "linux/userfaultfd.h" : NO 00:04:48.182 Has header "linux/vduse.h" : NO 00:04:48.182 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:04:48.182 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:48.182 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:48.182 Message: drivers/mempool/ring: Defining dependency 
"mempool_ring" 00:04:48.182 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:48.182 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:48.182 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:48.182 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:04:48.182 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:48.182 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:48.182 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:48.182 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:48.182 Configuring doxy-api-html.conf using configuration 00:04:48.182 Configuring doxy-api-man.conf using configuration 00:04:48.182 Program mandb found: NO 00:04:48.182 Program sphinx-build found: NO 00:04:48.182 Configuring rte_build_config.h using configuration 00:04:48.182 Message: 00:04:48.182 ================= 00:04:48.182 Applications Enabled 00:04:48.182 ================= 00:04:48.182 00:04:48.182 apps: 00:04:48.182 00:04:48.182 00:04:48.182 Message: 00:04:48.182 ================= 00:04:48.182 Libraries Enabled 00:04:48.182 ================= 00:04:48.182 00:04:48.182 libs: 00:04:48.182 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:48.182 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:48.182 cryptodev, dmadev, reorder, security, 00:04:48.182 00:04:48.182 Message: 00:04:48.182 =============== 00:04:48.182 Drivers Enabled 00:04:48.182 =============== 00:04:48.182 00:04:48.182 common: 00:04:48.182 00:04:48.182 bus: 00:04:48.182 pci, vdev, 00:04:48.182 mempool: 00:04:48.182 ring, 00:04:48.182 dma: 00:04:48.182 00:04:48.182 net: 00:04:48.182 00:04:48.182 crypto: 00:04:48.182 00:04:48.182 compress: 00:04:48.182 00:04:48.182 00:04:48.182 Message: 00:04:48.182 ================= 00:04:48.182 Content Skipped 00:04:48.182 ================= 00:04:48.182 00:04:48.182 apps: 00:04:48.182 dumpcap: explicitly disabled via build config 00:04:48.182 graph: explicitly disabled via build config 00:04:48.182 pdump: explicitly disabled via build config 00:04:48.182 proc-info: explicitly disabled via build config 00:04:48.182 test-acl: explicitly disabled via build config 00:04:48.182 test-bbdev: explicitly disabled via build config 00:04:48.182 test-cmdline: explicitly disabled via build config 00:04:48.182 test-compress-perf: explicitly disabled via build config 00:04:48.182 test-crypto-perf: explicitly disabled via build config 00:04:48.182 test-dma-perf: explicitly disabled via build config 00:04:48.182 test-eventdev: explicitly disabled via build config 00:04:48.182 test-fib: explicitly disabled via build config 00:04:48.182 test-flow-perf: explicitly disabled via build config 00:04:48.182 test-gpudev: explicitly disabled via build config 00:04:48.182 test-mldev: explicitly disabled via build config 00:04:48.182 test-pipeline: explicitly disabled via build config 00:04:48.182 test-pmd: explicitly disabled via build config 00:04:48.182 test-regex: explicitly disabled via build config 00:04:48.182 test-sad: explicitly disabled via build config 00:04:48.182 test-security-perf: explicitly disabled via build config 00:04:48.182 00:04:48.182 libs: 00:04:48.182 argparse: explicitly disabled via build config 00:04:48.182 metrics: explicitly disabled via build config 00:04:48.182 acl: explicitly disabled via build config 00:04:48.182 bbdev: explicitly disabled via build config 00:04:48.182 bitratestats: 
explicitly disabled via build config 00:04:48.182 bpf: explicitly disabled via build config 00:04:48.182 cfgfile: explicitly disabled via build config 00:04:48.182 distributor: explicitly disabled via build config 00:04:48.182 efd: explicitly disabled via build config 00:04:48.182 eventdev: explicitly disabled via build config 00:04:48.182 dispatcher: explicitly disabled via build config 00:04:48.182 gpudev: explicitly disabled via build config 00:04:48.182 gro: explicitly disabled via build config 00:04:48.182 gso: explicitly disabled via build config 00:04:48.182 ip_frag: explicitly disabled via build config 00:04:48.182 jobstats: explicitly disabled via build config 00:04:48.182 latencystats: explicitly disabled via build config 00:04:48.182 lpm: explicitly disabled via build config 00:04:48.182 member: explicitly disabled via build config 00:04:48.182 pcapng: explicitly disabled via build config 00:04:48.182 power: only supported on Linux 00:04:48.182 rawdev: explicitly disabled via build config 00:04:48.182 regexdev: explicitly disabled via build config 00:04:48.182 mldev: explicitly disabled via build config 00:04:48.182 rib: explicitly disabled via build config 00:04:48.182 sched: explicitly disabled via build config 00:04:48.182 stack: explicitly disabled via build config 00:04:48.182 vhost: only supported on Linux 00:04:48.182 ipsec: explicitly disabled via build config 00:04:48.182 pdcp: explicitly disabled via build config 00:04:48.182 fib: explicitly disabled via build config 00:04:48.182 port: explicitly disabled via build config 00:04:48.182 pdump: explicitly disabled via build config 00:04:48.182 table: explicitly disabled via build config 00:04:48.182 pipeline: explicitly disabled via build config 00:04:48.182 graph: explicitly disabled via build config 00:04:48.182 node: explicitly disabled via build config 00:04:48.182 00:04:48.182 drivers: 00:04:48.182 common/cpt: not in enabled drivers build config 00:04:48.182 common/dpaax: not in enabled drivers build config 00:04:48.182 common/iavf: not in enabled drivers build config 00:04:48.182 common/idpf: not in enabled drivers build config 00:04:48.182 common/ionic: not in enabled drivers build config 00:04:48.182 common/mvep: not in enabled drivers build config 00:04:48.182 common/octeontx: not in enabled drivers build config 00:04:48.182 bus/auxiliary: not in enabled drivers build config 00:04:48.182 bus/cdx: not in enabled drivers build config 00:04:48.182 bus/dpaa: not in enabled drivers build config 00:04:48.182 bus/fslmc: not in enabled drivers build config 00:04:48.182 bus/ifpga: not in enabled drivers build config 00:04:48.182 bus/platform: not in enabled drivers build config 00:04:48.182 bus/uacce: not in enabled drivers build config 00:04:48.182 bus/vmbus: not in enabled drivers build config 00:04:48.182 common/cnxk: not in enabled drivers build config 00:04:48.182 common/mlx5: not in enabled drivers build config 00:04:48.182 common/nfp: not in enabled drivers build config 00:04:48.182 common/nitrox: not in enabled drivers build config 00:04:48.182 common/qat: not in enabled drivers build config 00:04:48.183 common/sfc_efx: not in enabled drivers build config 00:04:48.183 mempool/bucket: not in enabled drivers build config 00:04:48.183 mempool/cnxk: not in enabled drivers build config 00:04:48.183 mempool/dpaa: not in enabled drivers build config 00:04:48.183 mempool/dpaa2: not in enabled drivers build config 00:04:48.183 mempool/octeontx: not in enabled drivers build config 00:04:48.183 mempool/stack: not in enabled 
drivers build config 00:04:48.183 dma/cnxk: not in enabled drivers build config 00:04:48.183 dma/dpaa: not in enabled drivers build config 00:04:48.183 dma/dpaa2: not in enabled drivers build config 00:04:48.183 dma/hisilicon: not in enabled drivers build config 00:04:48.183 dma/idxd: not in enabled drivers build config 00:04:48.183 dma/ioat: not in enabled drivers build config 00:04:48.183 dma/skeleton: not in enabled drivers build config 00:04:48.183 net/af_packet: not in enabled drivers build config 00:04:48.183 net/af_xdp: not in enabled drivers build config 00:04:48.183 net/ark: not in enabled drivers build config 00:04:48.183 net/atlantic: not in enabled drivers build config 00:04:48.183 net/avp: not in enabled drivers build config 00:04:48.183 net/axgbe: not in enabled drivers build config 00:04:48.183 net/bnx2x: not in enabled drivers build config 00:04:48.183 net/bnxt: not in enabled drivers build config 00:04:48.183 net/bonding: not in enabled drivers build config 00:04:48.183 net/cnxk: not in enabled drivers build config 00:04:48.183 net/cpfl: not in enabled drivers build config 00:04:48.183 net/cxgbe: not in enabled drivers build config 00:04:48.183 net/dpaa: not in enabled drivers build config 00:04:48.183 net/dpaa2: not in enabled drivers build config 00:04:48.183 net/e1000: not in enabled drivers build config 00:04:48.183 net/ena: not in enabled drivers build config 00:04:48.183 net/enetc: not in enabled drivers build config 00:04:48.183 net/enetfec: not in enabled drivers build config 00:04:48.183 net/enic: not in enabled drivers build config 00:04:48.183 net/failsafe: not in enabled drivers build config 00:04:48.183 net/fm10k: not in enabled drivers build config 00:04:48.183 net/gve: not in enabled drivers build config 00:04:48.183 net/hinic: not in enabled drivers build config 00:04:48.183 net/hns3: not in enabled drivers build config 00:04:48.183 net/i40e: not in enabled drivers build config 00:04:48.183 net/iavf: not in enabled drivers build config 00:04:48.183 net/ice: not in enabled drivers build config 00:04:48.183 net/idpf: not in enabled drivers build config 00:04:48.183 net/igc: not in enabled drivers build config 00:04:48.183 net/ionic: not in enabled drivers build config 00:04:48.183 net/ipn3ke: not in enabled drivers build config 00:04:48.183 net/ixgbe: not in enabled drivers build config 00:04:48.183 net/mana: not in enabled drivers build config 00:04:48.183 net/memif: not in enabled drivers build config 00:04:48.183 net/mlx4: not in enabled drivers build config 00:04:48.183 net/mlx5: not in enabled drivers build config 00:04:48.183 net/mvneta: not in enabled drivers build config 00:04:48.183 net/mvpp2: not in enabled drivers build config 00:04:48.183 net/netvsc: not in enabled drivers build config 00:04:48.183 net/nfb: not in enabled drivers build config 00:04:48.183 net/nfp: not in enabled drivers build config 00:04:48.183 net/ngbe: not in enabled drivers build config 00:04:48.183 net/null: not in enabled drivers build config 00:04:48.183 net/octeontx: not in enabled drivers build config 00:04:48.183 net/octeon_ep: not in enabled drivers build config 00:04:48.183 net/pcap: not in enabled drivers build config 00:04:48.183 net/pfe: not in enabled drivers build config 00:04:48.183 net/qede: not in enabled drivers build config 00:04:48.183 net/ring: not in enabled drivers build config 00:04:48.183 net/sfc: not in enabled drivers build config 00:04:48.183 net/softnic: not in enabled drivers build config 00:04:48.183 net/tap: not in enabled drivers build config 
00:04:48.183 net/thunderx: not in enabled drivers build config 00:04:48.183 net/txgbe: not in enabled drivers build config 00:04:48.183 net/vdev_netvsc: not in enabled drivers build config 00:04:48.183 net/vhost: not in enabled drivers build config 00:04:48.183 net/virtio: not in enabled drivers build config 00:04:48.183 net/vmxnet3: not in enabled drivers build config 00:04:48.183 raw/*: missing internal dependency, "rawdev" 00:04:48.183 crypto/armv8: not in enabled drivers build config 00:04:48.183 crypto/bcmfs: not in enabled drivers build config 00:04:48.183 crypto/caam_jr: not in enabled drivers build config 00:04:48.183 crypto/ccp: not in enabled drivers build config 00:04:48.183 crypto/cnxk: not in enabled drivers build config 00:04:48.183 crypto/dpaa_sec: not in enabled drivers build config 00:04:48.183 crypto/dpaa2_sec: not in enabled drivers build config 00:04:48.183 crypto/ipsec_mb: not in enabled drivers build config 00:04:48.183 crypto/mlx5: not in enabled drivers build config 00:04:48.183 crypto/mvsam: not in enabled drivers build config 00:04:48.183 crypto/nitrox: not in enabled drivers build config 00:04:48.183 crypto/null: not in enabled drivers build config 00:04:48.183 crypto/octeontx: not in enabled drivers build config 00:04:48.183 crypto/openssl: not in enabled drivers build config 00:04:48.183 crypto/scheduler: not in enabled drivers build config 00:04:48.183 crypto/uadk: not in enabled drivers build config 00:04:48.183 crypto/virtio: not in enabled drivers build config 00:04:48.183 compress/isal: not in enabled drivers build config 00:04:48.183 compress/mlx5: not in enabled drivers build config 00:04:48.183 compress/nitrox: not in enabled drivers build config 00:04:48.183 compress/octeontx: not in enabled drivers build config 00:04:48.183 compress/zlib: not in enabled drivers build config 00:04:48.183 regex/*: missing internal dependency, "regexdev" 00:04:48.183 ml/*: missing internal dependency, "mldev" 00:04:48.183 vdpa/*: missing internal dependency, "vhost" 00:04:48.183 event/*: missing internal dependency, "eventdev" 00:04:48.183 baseband/*: missing internal dependency, "bbdev" 00:04:48.183 gpu/*: missing internal dependency, "gpudev" 00:04:48.183 00:04:48.183 00:04:48.183 Build targets in project: 81 00:04:48.183 00:04:48.183 DPDK 24.03.0 00:04:48.183 00:04:48.183 User defined options 00:04:48.183 buildtype : debug 00:04:48.183 default_library : static 00:04:48.183 libdir : lib 00:04:48.183 prefix : / 00:04:48.183 c_args : -fPIC -Werror 00:04:48.183 c_link_args : 00:04:48.183 cpu_instruction_set: native 00:04:48.183 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:48.183 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:48.183 enable_docs : false 00:04:48.183 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:48.183 enable_kmods : true 00:04:48.183 max_lcores : 128 00:04:48.183 tests : false 00:04:48.183 00:04:48.183 Found ninja-1.11.1 at /usr/local/bin/ninja 00:04:48.450 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:48.450 [1/233] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:48.450 
[2/233] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:04:48.450 [3/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:48.450 [4/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:48.720 [5/233] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:48.720 [6/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:48.720 [7/233] Linking static target lib/librte_kvargs.a 00:04:48.720 [8/233] Linking static target lib/librte_log.a 00:04:48.979 [9/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:48.979 [10/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:48.979 [11/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:48.979 [12/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:48.979 [13/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:48.979 [14/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:49.237 [15/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:49.237 [16/233] Linking static target lib/librte_telemetry.a 00:04:49.237 [17/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:49.237 [18/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:49.237 [19/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:49.237 [20/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:49.495 [21/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:49.495 [22/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:49.495 [23/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:49.495 [24/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:49.495 [25/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:49.495 [26/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:49.754 [27/233] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:49.754 [28/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:49.754 [29/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:49.754 [30/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:49.754 [31/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:49.754 [32/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:49.754 [33/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:49.754 [34/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:49.754 [35/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:50.012 [36/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:50.012 [37/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:50.012 [38/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:50.012 [39/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:50.012 [40/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:50.012 [41/233] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:50.270 [42/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:50.270 [43/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:50.270 [44/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:50.270 [45/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:50.270 [46/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:50.528 [47/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:50.528 [48/233] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:50.528 [49/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:50.528 [50/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:50.528 [51/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:04:50.529 [52/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:50.529 [53/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:50.787 [54/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:50.787 [55/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:50.787 [56/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:50.787 [57/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:50.787 [58/233] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:51.046 [59/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:04:51.046 [60/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:51.046 [61/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:04:51.046 [62/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:04:51.046 [63/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:51.046 [64/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:51.046 [65/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:04:51.046 [66/233] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.046 [67/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:04:51.304 [68/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:04:51.304 [69/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:04:51.304 [70/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:04:51.304 [71/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:04:51.304 [72/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:04:51.563 [73/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:51.563 [74/233] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:51.563 [75/233] Linking static target lib/librte_ring.a 00:04:51.563 [76/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:51.563 [77/233] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:51.822 [78/233] Linking static target lib/librte_rcu.a 00:04:51.822 [79/233] Linking static target lib/librte_eal.a 00:04:51.822 [80/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:51.822 [81/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:51.822 [82/233] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:51.822 [83/233] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:51.822 [84/233] Linking static target lib/librte_mempool.a 00:04:51.822 [85/233] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.822 [86/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:51.822 [87/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:51.822 [88/233] Linking target lib/librte_log.so.24.1 00:04:52.140 [89/233] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:52.140 [90/233] Linking target lib/librte_kvargs.so.24.1 00:04:52.140 [91/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:52.140 [92/233] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.140 [93/233] Linking target lib/librte_telemetry.so.24.1 00:04:52.140 [94/233] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.140 [95/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:52.140 [96/233] Linking static target lib/librte_mbuf.a 00:04:52.140 [97/233] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:52.140 [98/233] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:52.140 [99/233] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:52.140 [100/233] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:52.399 [101/233] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:52.399 [102/233] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:52.399 [103/233] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:52.399 [104/233] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:52.399 [105/233] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:52.399 [106/233] Linking static target lib/librte_net.a 00:04:52.657 [107/233] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:52.657 [108/233] Linking static target lib/librte_meter.a 00:04:52.657 [109/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:52.657 [110/233] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.657 [111/233] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.916 [112/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:52.916 [113/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:52.916 [114/233] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.916 [115/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:53.175 [116/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:53.434 [117/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:53.434 [118/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:53.434 [119/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:53.434 [120/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:53.434 [121/233] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:53.434 [122/233] Linking static target lib/librte_pci.a 00:04:53.434 [123/233] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:53.434 [124/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:53.434 [125/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:53.434 [126/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:53.434 [127/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:53.694 [128/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:53.694 [129/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:53.694 [130/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:53.694 [131/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:53.694 [132/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:53.694 [133/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:53.694 [134/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:53.694 [135/233] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.694 [136/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:53.694 [137/233] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:53.694 [138/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:53.694 [139/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:53.694 [140/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:53.953 [141/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:53.953 [142/233] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.953 [143/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:53.953 [144/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:53.953 [145/233] Linking static target lib/librte_cmdline.a 00:04:54.212 [146/233] Linking static target lib/librte_ethdev.a 00:04:54.212 [147/233] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:54.212 [148/233] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:54.212 [149/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:54.212 [150/233] Linking static target lib/librte_timer.a 00:04:54.212 [151/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:54.212 [152/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:54.212 [153/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:54.470 [154/233] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:54.470 [155/233] Linking static target lib/librte_hash.a 00:04:54.470 [156/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:54.728 [157/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:54.728 [158/233] Linking static target lib/librte_compressdev.a 00:04:54.728 [159/233] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.728 [160/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:54.728 [161/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:54.728 [162/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:54.986 [163/233] 
Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:54.986 [164/233] Linking static target lib/librte_dmadev.a 00:04:54.986 [165/233] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:54.986 [166/233] Linking static target lib/librte_reorder.a 00:04:55.245 [167/233] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.245 [168/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:55.245 [169/233] Linking static target lib/librte_cryptodev.a 00:04:55.245 [170/233] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.245 [171/233] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.245 [172/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:55.245 [173/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:55.245 [174/233] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.245 [175/233] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:55.245 [176/233] Linking static target lib/librte_security.a 00:04:55.245 [177/233] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.245 [178/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:55.504 [179/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:04:55.504 [180/233] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:55.504 [181/233] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.763 [182/233] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:55.763 [183/233] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:55.763 [184/233] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:55.763 [185/233] Linking static target drivers/librte_bus_pci.a 00:04:55.763 [186/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:55.763 [187/233] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:55.763 [188/233] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.763 [189/233] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:55.763 [190/233] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:56.022 [191/233] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.022 [192/233] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:56.022 [193/233] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:56.022 [194/233] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:56.022 [195/233] Linking static target drivers/librte_bus_vdev.a 00:04:56.022 [196/233] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:56.022 [197/233] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:56.022 [198/233] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:56.022 [199/233] Linking static target drivers/librte_mempool_ring.a 00:04:56.282 [200/233] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.541 [201/233] Generating kernel/freebsd/contigmem with a custom command 00:04:56.541 machine -> /usr/src/sys/amd64/include 00:04:56.541 x86 -> /usr/src/sys/x86/include 00:04:56.541 i386 -> /usr/src/sys/i386/include 00:04:56.541 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:04:56.541 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:04:56.541 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:04:56.541 touch opt_global.h 00:04:56.541 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:04:56.541 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:04:56.541 :> export_syms 00:04:56.541 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:04:56.541 objcopy --strip-debug contigmem.ko 00:04:56.800 [202/233] Generating kernel/freebsd/nic_uio with a custom command 00:04:56.801 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:04:56.801 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:04:56.801 :> export_syms 00:04:56.801 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:04:56.801 objcopy --strip-debug nic_uio.ko 00:05:00.090 [203/233] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.621 [204/233] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.621 [205/233] Linking target lib/librte_eal.so.24.1 00:05:02.621 [206/233] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:02.621 [207/233] Linking target lib/librte_pci.so.24.1 00:05:02.621 [208/233] Linking target drivers/librte_bus_vdev.so.24.1 00:05:02.621 [209/233] Linking target lib/librte_meter.so.24.1 00:05:02.621 [210/233] Linking target lib/librte_ring.so.24.1 00:05:02.621 [211/233] Linking target lib/librte_timer.so.24.1 00:05:02.621 [212/233] Linking target lib/librte_dmadev.so.24.1 00:05:02.621 [213/233] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:02.621 [214/233] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:02.621 [215/233] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:02.621 [216/233] Linking target lib/librte_rcu.so.24.1 00:05:02.621 [217/233] Linking target drivers/librte_bus_pci.so.24.1 00:05:02.621 [218/233] Linking target lib/librte_mempool.so.24.1 00:05:02.621 [219/233] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:02.621 [220/233] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:02.621 [221/233] Linking target drivers/librte_mempool_ring.so.24.1 00:05:02.621 [222/233] Linking target lib/librte_mbuf.so.24.1 00:05:02.879 [223/233] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:02.879 [224/233] Linking target lib/librte_net.so.24.1 00:05:02.879 [225/233] Linking target lib/librte_compressdev.so.24.1 00:05:02.879 [226/233] Linking target lib/librte_reorder.so.24.1 00:05:02.879 [227/233] Linking target lib/librte_cryptodev.so.24.1 00:05:02.879 [228/233] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:02.879 [229/233] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:03.138 [230/233] Linking target lib/librte_hash.so.24.1 00:05:03.138 [231/233] 
Linking target lib/librte_security.so.24.1 00:05:03.138 [232/233] Linking target lib/librte_cmdline.so.24.1 00:05:03.138 [233/233] Linking target lib/librte_ethdev.so.24.1 00:05:03.138 INFO: autodetecting backend as ninja 00:05:03.138 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:03.705 CC lib/ut_mock/mock.o 00:05:03.705 CC lib/ut/ut.o 00:05:03.705 CC lib/log/log.o 00:05:03.705 CC lib/log/log_flags.o 00:05:03.705 CC lib/log/log_deprecated.o 00:05:03.964 LIB libspdk_ut_mock.a 00:05:03.964 LIB libspdk_ut.a 00:05:03.964 LIB libspdk_log.a 00:05:03.964 CC lib/dma/dma.o 00:05:03.964 CC lib/util/base64.o 00:05:03.964 CC lib/ioat/ioat.o 00:05:03.964 CC lib/util/bit_array.o 00:05:03.964 CC lib/util/cpuset.o 00:05:03.964 CC lib/util/crc16.o 00:05:03.964 CC lib/util/crc32.o 00:05:03.964 CC lib/util/crc32c.o 00:05:03.964 CC lib/util/crc32_ieee.o 00:05:03.964 CXX lib/trace_parser/trace.o 00:05:04.222 CC lib/util/crc64.o 00:05:04.222 CC lib/util/dif.o 00:05:04.222 CC lib/util/fd.o 00:05:04.222 CC lib/util/fd_group.o 00:05:04.222 CC lib/util/file.o 00:05:04.222 CC lib/util/hexlify.o 00:05:04.222 LIB libspdk_dma.a 00:05:04.222 CC lib/util/iov.o 00:05:04.222 CC lib/util/math.o 00:05:04.222 LIB libspdk_ioat.a 00:05:04.222 CC lib/util/net.o 00:05:04.222 CC lib/util/pipe.o 00:05:04.222 CC lib/util/strerror_tls.o 00:05:04.222 CC lib/util/string.o 00:05:04.222 CC lib/util/uuid.o 00:05:04.222 CC lib/util/xor.o 00:05:04.222 CC lib/util/zipf.o 00:05:04.480 LIB libspdk_util.a 00:05:04.480 CC lib/idxd/idxd.o 00:05:04.480 CC lib/idxd/idxd_user.o 00:05:04.480 CC lib/vmd/led.o 00:05:04.480 CC lib/vmd/vmd.o 00:05:04.480 CC lib/conf/conf.o 00:05:04.480 CC lib/json/json_parse.o 00:05:04.480 CC lib/env_dpdk/env.o 00:05:04.480 CC lib/rdma_utils/rdma_utils.o 00:05:04.480 CC lib/rdma_provider/common.o 00:05:04.480 CC lib/env_dpdk/memory.o 00:05:04.480 CC lib/env_dpdk/pci.o 00:05:04.480 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:04.480 CC lib/json/json_util.o 00:05:04.480 LIB libspdk_conf.a 00:05:04.480 CC lib/env_dpdk/init.o 00:05:04.480 LIB libspdk_rdma_utils.a 00:05:04.738 CC lib/json/json_write.o 00:05:04.738 LIB libspdk_idxd.a 00:05:04.738 LIB libspdk_vmd.a 00:05:04.738 CC lib/env_dpdk/threads.o 00:05:04.738 CC lib/env_dpdk/pci_ioat.o 00:05:04.738 LIB libspdk_rdma_provider.a 00:05:04.738 CC lib/env_dpdk/pci_virtio.o 00:05:04.738 CC lib/env_dpdk/pci_vmd.o 00:05:04.738 CC lib/env_dpdk/pci_idxd.o 00:05:04.738 CC lib/env_dpdk/pci_event.o 00:05:04.738 CC lib/env_dpdk/sigbus_handler.o 00:05:04.738 CC lib/env_dpdk/pci_dpdk.o 00:05:04.738 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:04.738 LIB libspdk_json.a 00:05:04.738 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:04.738 CC lib/jsonrpc/jsonrpc_server.o 00:05:04.738 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:04.738 CC lib/jsonrpc/jsonrpc_client.o 00:05:04.738 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:05.040 LIB libspdk_jsonrpc.a 00:05:05.040 CC lib/rpc/rpc.o 00:05:05.298 LIB libspdk_env_dpdk.a 00:05:05.298 LIB libspdk_rpc.a 00:05:05.298 CC lib/trace/trace.o 00:05:05.298 CC lib/trace/trace_flags.o 00:05:05.298 CC lib/trace/trace_rpc.o 00:05:05.298 CC lib/keyring/keyring.o 00:05:05.298 CC lib/keyring/keyring_rpc.o 00:05:05.298 CC lib/notify/notify_rpc.o 00:05:05.298 CC lib/notify/notify.o 00:05:05.556 LIB libspdk_notify.a 00:05:05.556 LIB libspdk_keyring.a 00:05:05.556 LIB libspdk_trace.a 00:05:05.556 LIB libspdk_trace_parser.a 00:05:05.556 CC lib/sock/sock.o 00:05:05.556 CC lib/sock/sock_rpc.o 00:05:05.556 CC 
lib/thread/thread.o 00:05:05.556 CC lib/thread/iobuf.o 00:05:05.815 LIB libspdk_sock.a 00:05:05.815 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:05.815 CC lib/nvme/nvme_ctrlr.o 00:05:05.815 CC lib/nvme/nvme_fabric.o 00:05:05.815 CC lib/nvme/nvme_ns_cmd.o 00:05:05.815 CC lib/nvme/nvme_ns.o 00:05:05.815 CC lib/nvme/nvme_pcie_common.o 00:05:05.815 CC lib/nvme/nvme_pcie.o 00:05:05.815 CC lib/nvme/nvme_qpair.o 00:05:05.815 CC lib/nvme/nvme.o 00:05:05.815 LIB libspdk_thread.a 00:05:05.815 CC lib/nvme/nvme_quirks.o 00:05:06.381 CC lib/nvme/nvme_transport.o 00:05:06.381 CC lib/nvme/nvme_discovery.o 00:05:06.381 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:06.381 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:06.381 CC lib/nvme/nvme_tcp.o 00:05:06.381 CC lib/accel/accel.o 00:05:06.381 CC lib/blob/blobstore.o 00:05:06.381 CC lib/init/json_config.o 00:05:06.381 CC lib/blob/request.o 00:05:06.639 CC lib/init/subsystem.o 00:05:06.639 CC lib/nvme/nvme_opal.o 00:05:06.639 CC lib/init/subsystem_rpc.o 00:05:06.639 CC lib/accel/accel_rpc.o 00:05:06.639 CC lib/init/rpc.o 00:05:06.639 CC lib/accel/accel_sw.o 00:05:06.639 LIB libspdk_init.a 00:05:06.639 CC lib/nvme/nvme_io_msg.o 00:05:06.639 CC lib/blob/zeroes.o 00:05:06.898 LIB libspdk_accel.a 00:05:06.898 CC lib/blob/blob_bs_dev.o 00:05:06.898 CC lib/nvme/nvme_poll_group.o 00:05:06.898 CC lib/event/app.o 00:05:06.898 CC lib/event/reactor.o 00:05:06.898 CC lib/event/log_rpc.o 00:05:06.898 CC lib/event/app_rpc.o 00:05:06.898 CC lib/event/scheduler_static.o 00:05:06.898 CC lib/bdev/bdev.o 00:05:06.898 CC lib/nvme/nvme_zns.o 00:05:06.898 CC lib/bdev/bdev_rpc.o 00:05:06.898 LIB libspdk_blob.a 00:05:06.898 CC lib/nvme/nvme_stubs.o 00:05:06.898 CC lib/bdev/bdev_zone.o 00:05:07.158 LIB libspdk_event.a 00:05:07.158 CC lib/bdev/part.o 00:05:07.158 CC lib/blobfs/blobfs.o 00:05:07.158 CC lib/nvme/nvme_auth.o 00:05:07.158 CC lib/blobfs/tree.o 00:05:07.158 CC lib/nvme/nvme_rdma.o 00:05:07.158 CC lib/lvol/lvol.o 00:05:07.158 CC lib/bdev/scsi_nvme.o 00:05:07.158 LIB libspdk_blobfs.a 00:05:07.417 LIB libspdk_lvol.a 00:05:07.417 LIB libspdk_bdev.a 00:05:07.676 CC lib/scsi/dev.o 00:05:07.676 CC lib/scsi/lun.o 00:05:07.676 CC lib/scsi/port.o 00:05:07.676 CC lib/scsi/scsi.o 00:05:07.676 CC lib/scsi/scsi_bdev.o 00:05:07.676 CC lib/scsi/scsi_pr.o 00:05:07.676 CC lib/scsi/scsi_rpc.o 00:05:07.676 CC lib/scsi/task.o 00:05:07.934 LIB libspdk_scsi.a 00:05:07.934 LIB libspdk_nvme.a 00:05:07.934 CC lib/iscsi/init_grp.o 00:05:07.934 CC lib/iscsi/conn.o 00:05:07.934 CC lib/iscsi/iscsi.o 00:05:07.934 CC lib/iscsi/md5.o 00:05:07.934 CC lib/iscsi/param.o 00:05:07.934 CC lib/iscsi/portal_grp.o 00:05:07.934 CC lib/iscsi/iscsi_rpc.o 00:05:07.934 CC lib/iscsi/iscsi_subsystem.o 00:05:07.934 CC lib/iscsi/tgt_node.o 00:05:07.934 CC lib/nvmf/ctrlr.o 00:05:08.193 CC lib/nvmf/ctrlr_discovery.o 00:05:08.193 CC lib/nvmf/ctrlr_bdev.o 00:05:08.193 CC lib/nvmf/subsystem.o 00:05:08.193 CC lib/nvmf/nvmf.o 00:05:08.193 CC lib/nvmf/nvmf_rpc.o 00:05:08.193 CC lib/nvmf/transport.o 00:05:08.193 CC lib/iscsi/task.o 00:05:08.193 CC lib/nvmf/tcp.o 00:05:08.193 CC lib/nvmf/stubs.o 00:05:08.193 CC lib/nvmf/mdns_server.o 00:05:08.193 CC lib/nvmf/rdma.o 00:05:08.193 CC lib/nvmf/auth.o 00:05:08.193 LIB libspdk_iscsi.a 00:05:08.760 LIB libspdk_nvmf.a 00:05:08.760 CC module/env_dpdk/env_dpdk_rpc.o 00:05:08.760 CC module/accel/error/accel_error.o 00:05:08.760 CC module/accel/error/accel_error_rpc.o 00:05:08.760 CC module/keyring/file/keyring.o 00:05:08.760 CC module/accel/ioat/accel_ioat.o 00:05:08.760 CC module/blob/bdev/blob_bdev.o 00:05:08.760 
CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:08.760 CC module/accel/dsa/accel_dsa.o 00:05:08.760 CC module/accel/iaa/accel_iaa.o 00:05:08.760 CC module/sock/posix/posix.o 00:05:08.760 LIB libspdk_env_dpdk_rpc.a 00:05:09.019 CC module/accel/ioat/accel_ioat_rpc.o 00:05:09.019 CC module/accel/iaa/accel_iaa_rpc.o 00:05:09.019 CC module/keyring/file/keyring_rpc.o 00:05:09.019 LIB libspdk_accel_error.a 00:05:09.019 LIB libspdk_scheduler_dynamic.a 00:05:09.019 CC module/accel/dsa/accel_dsa_rpc.o 00:05:09.019 LIB libspdk_blob_bdev.a 00:05:09.019 LIB libspdk_accel_ioat.a 00:05:09.019 LIB libspdk_keyring_file.a 00:05:09.019 LIB libspdk_accel_iaa.a 00:05:09.019 LIB libspdk_accel_dsa.a 00:05:09.019 CC module/bdev/gpt/gpt.o 00:05:09.019 CC module/bdev/error/vbdev_error.o 00:05:09.019 CC module/bdev/lvol/vbdev_lvol.o 00:05:09.019 CC module/bdev/delay/vbdev_delay.o 00:05:09.019 CC module/blobfs/bdev/blobfs_bdev.o 00:05:09.019 CC module/bdev/malloc/bdev_malloc.o 00:05:09.019 CC module/bdev/nvme/bdev_nvme.o 00:05:09.019 CC module/bdev/passthru/vbdev_passthru.o 00:05:09.019 CC module/bdev/null/bdev_null.o 00:05:09.019 LIB libspdk_sock_posix.a 00:05:09.277 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:09.277 CC module/bdev/gpt/vbdev_gpt.o 00:05:09.277 CC module/bdev/null/bdev_null_rpc.o 00:05:09.277 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:09.278 CC module/bdev/error/vbdev_error_rpc.o 00:05:09.278 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:09.278 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:09.278 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:09.278 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:09.278 LIB libspdk_bdev_gpt.a 00:05:09.278 CC module/bdev/nvme/nvme_rpc.o 00:05:09.278 LIB libspdk_bdev_error.a 00:05:09.278 LIB libspdk_bdev_lvol.a 00:05:09.278 LIB libspdk_bdev_null.a 00:05:09.278 LIB libspdk_bdev_passthru.a 00:05:09.278 CC module/bdev/nvme/bdev_mdns_client.o 00:05:09.278 LIB libspdk_bdev_malloc.a 00:05:09.278 LIB libspdk_bdev_delay.a 00:05:09.278 LIB libspdk_blobfs_bdev.a 00:05:09.278 CC module/bdev/raid/bdev_raid.o 00:05:09.278 CC module/bdev/raid/bdev_raid_sb.o 00:05:09.278 CC module/bdev/raid/bdev_raid_rpc.o 00:05:09.278 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:09.278 CC module/bdev/aio/bdev_aio.o 00:05:09.278 CC module/bdev/split/vbdev_split.o 00:05:09.536 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:09.536 CC module/bdev/aio/bdev_aio_rpc.o 00:05:09.536 CC module/bdev/raid/raid0.o 00:05:09.536 CC module/bdev/split/vbdev_split_rpc.o 00:05:09.536 CC module/bdev/raid/raid1.o 00:05:09.536 CC module/bdev/raid/concat.o 00:05:09.536 LIB libspdk_bdev_aio.a 00:05:09.536 LIB libspdk_bdev_zone_block.a 00:05:09.536 LIB libspdk_bdev_split.a 00:05:09.536 LIB libspdk_bdev_nvme.a 00:05:09.536 LIB libspdk_bdev_raid.a 00:05:09.794 CC module/event/subsystems/iobuf/iobuf.o 00:05:09.794 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:09.795 CC module/event/subsystems/sock/sock.o 00:05:09.795 CC module/event/subsystems/scheduler/scheduler.o 00:05:09.795 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:09.795 CC module/event/subsystems/vmd/vmd.o 00:05:09.795 CC module/event/subsystems/keyring/keyring.o 00:05:10.053 LIB libspdk_event_keyring.a 00:05:10.053 LIB libspdk_event_vmd.a 00:05:10.053 LIB libspdk_event_scheduler.a 00:05:10.053 LIB libspdk_event_sock.a 00:05:10.053 LIB libspdk_event_iobuf.a 00:05:10.053 CC module/event/subsystems/accel/accel.o 00:05:10.311 LIB libspdk_event_accel.a 00:05:10.311 CC module/event/subsystems/bdev/bdev.o 00:05:10.311 LIB libspdk_event_bdev.a 
00:05:10.569 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:10.569 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:10.569 CC module/event/subsystems/scsi/scsi.o 00:05:10.569 LIB libspdk_event_scsi.a 00:05:10.569 LIB libspdk_event_nvmf.a 00:05:10.829 CC module/event/subsystems/iscsi/iscsi.o 00:05:10.829 LIB libspdk_event_iscsi.a 00:05:11.092 CC app/spdk_nvme_perf/perf.o 00:05:11.092 CC app/trace_record/trace_record.o 00:05:11.092 CXX app/trace/trace.o 00:05:11.092 CC app/spdk_lspci/spdk_lspci.o 00:05:11.092 CC app/spdk_nvme_identify/identify.o 00:05:11.092 CC test/thread/poller_perf/poller_perf.o 00:05:11.092 CC app/nvmf_tgt/nvmf_main.o 00:05:11.092 CC app/spdk_tgt/spdk_tgt.o 00:05:11.092 CC examples/util/zipf/zipf.o 00:05:11.092 CC app/iscsi_tgt/iscsi_tgt.o 00:05:11.092 LINK spdk_lspci 00:05:11.092 LINK spdk_trace_record 00:05:11.092 LINK poller_perf 00:05:11.092 LINK zipf 00:05:11.092 LINK spdk_tgt 00:05:11.092 LINK nvmf_tgt 00:05:11.350 LINK iscsi_tgt 00:05:11.351 CC test/thread/lock/spdk_lock.o 00:05:11.351 CC examples/ioat/perf/perf.o 00:05:11.351 CC test/dma/test_dma/test_dma.o 00:05:11.351 LINK spdk_nvme_perf 00:05:11.351 CC examples/ioat/verify/verify.o 00:05:11.351 LINK spdk_nvme_identify 00:05:11.351 CC test/app/bdev_svc/bdev_svc.o 00:05:11.351 LINK ioat_perf 00:05:11.351 CC examples/thread/thread/thread_ex.o 00:05:11.351 LINK verify 00:05:11.351 TEST_HEADER include/spdk/accel.h 00:05:11.351 TEST_HEADER include/spdk/accel_module.h 00:05:11.351 TEST_HEADER include/spdk/assert.h 00:05:11.351 TEST_HEADER include/spdk/barrier.h 00:05:11.351 TEST_HEADER include/spdk/base64.h 00:05:11.351 TEST_HEADER include/spdk/bdev.h 00:05:11.351 TEST_HEADER include/spdk/bdev_module.h 00:05:11.351 TEST_HEADER include/spdk/bdev_zone.h 00:05:11.351 TEST_HEADER include/spdk/bit_array.h 00:05:11.351 TEST_HEADER include/spdk/bit_pool.h 00:05:11.351 TEST_HEADER include/spdk/blob.h 00:05:11.351 TEST_HEADER include/spdk/blob_bdev.h 00:05:11.351 TEST_HEADER include/spdk/blobfs.h 00:05:11.351 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:11.351 TEST_HEADER include/spdk/conf.h 00:05:11.610 TEST_HEADER include/spdk/config.h 00:05:11.610 TEST_HEADER include/spdk/cpuset.h 00:05:11.610 TEST_HEADER include/spdk/crc16.h 00:05:11.610 TEST_HEADER include/spdk/crc32.h 00:05:11.610 LINK bdev_svc 00:05:11.610 TEST_HEADER include/spdk/crc64.h 00:05:11.610 TEST_HEADER include/spdk/dif.h 00:05:11.610 TEST_HEADER include/spdk/dma.h 00:05:11.610 TEST_HEADER include/spdk/endian.h 00:05:11.610 TEST_HEADER include/spdk/env.h 00:05:11.610 TEST_HEADER include/spdk/env_dpdk.h 00:05:11.610 TEST_HEADER include/spdk/event.h 00:05:11.610 TEST_HEADER include/spdk/fd.h 00:05:11.610 TEST_HEADER include/spdk/fd_group.h 00:05:11.610 TEST_HEADER include/spdk/file.h 00:05:11.610 TEST_HEADER include/spdk/ftl.h 00:05:11.610 TEST_HEADER include/spdk/gpt_spec.h 00:05:11.610 TEST_HEADER include/spdk/hexlify.h 00:05:11.610 TEST_HEADER include/spdk/histogram_data.h 00:05:11.610 TEST_HEADER include/spdk/idxd.h 00:05:11.610 TEST_HEADER include/spdk/idxd_spec.h 00:05:11.610 TEST_HEADER include/spdk/init.h 00:05:11.610 TEST_HEADER include/spdk/ioat.h 00:05:11.610 TEST_HEADER include/spdk/ioat_spec.h 00:05:11.610 TEST_HEADER include/spdk/iscsi_spec.h 00:05:11.610 CC examples/sock/hello_world/hello_sock.o 00:05:11.610 TEST_HEADER include/spdk/json.h 00:05:11.610 TEST_HEADER include/spdk/jsonrpc.h 00:05:11.610 TEST_HEADER include/spdk/keyring.h 00:05:11.610 TEST_HEADER include/spdk/keyring_module.h 00:05:11.610 TEST_HEADER include/spdk/likely.h 
00:05:11.610 TEST_HEADER include/spdk/log.h 00:05:11.610 TEST_HEADER include/spdk/lvol.h 00:05:11.610 TEST_HEADER include/spdk/memory.h 00:05:11.610 TEST_HEADER include/spdk/mmio.h 00:05:11.610 TEST_HEADER include/spdk/nbd.h 00:05:11.610 LINK test_dma 00:05:11.610 TEST_HEADER include/spdk/net.h 00:05:11.610 TEST_HEADER include/spdk/notify.h 00:05:11.610 TEST_HEADER include/spdk/nvme.h 00:05:11.610 TEST_HEADER include/spdk/nvme_intel.h 00:05:11.610 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:11.610 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:11.610 TEST_HEADER include/spdk/nvme_spec.h 00:05:11.610 TEST_HEADER include/spdk/nvme_zns.h 00:05:11.610 CC examples/vmd/lsvmd/lsvmd.o 00:05:11.610 TEST_HEADER include/spdk/nvmf.h 00:05:11.610 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:11.610 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:11.610 TEST_HEADER include/spdk/nvmf_spec.h 00:05:11.610 CC test/env/mem_callbacks/mem_callbacks.o 00:05:11.610 TEST_HEADER include/spdk/nvmf_transport.h 00:05:11.610 TEST_HEADER include/spdk/opal.h 00:05:11.610 TEST_HEADER include/spdk/opal_spec.h 00:05:11.610 TEST_HEADER include/spdk/pci_ids.h 00:05:11.610 TEST_HEADER include/spdk/pipe.h 00:05:11.610 TEST_HEADER include/spdk/queue.h 00:05:11.610 TEST_HEADER include/spdk/reduce.h 00:05:11.610 CC test/env/vtophys/vtophys.o 00:05:11.610 TEST_HEADER include/spdk/rpc.h 00:05:11.610 TEST_HEADER include/spdk/scheduler.h 00:05:11.610 TEST_HEADER include/spdk/scsi.h 00:05:11.610 TEST_HEADER include/spdk/scsi_spec.h 00:05:11.610 TEST_HEADER include/spdk/sock.h 00:05:11.610 TEST_HEADER include/spdk/stdinc.h 00:05:11.610 TEST_HEADER include/spdk/string.h 00:05:11.610 TEST_HEADER include/spdk/thread.h 00:05:11.610 TEST_HEADER include/spdk/trace.h 00:05:11.610 TEST_HEADER include/spdk/trace_parser.h 00:05:11.610 TEST_HEADER include/spdk/tree.h 00:05:11.610 TEST_HEADER include/spdk/ublk.h 00:05:11.610 TEST_HEADER include/spdk/util.h 00:05:11.610 TEST_HEADER include/spdk/uuid.h 00:05:11.610 TEST_HEADER include/spdk/version.h 00:05:11.610 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:11.610 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:11.610 TEST_HEADER include/spdk/vhost.h 00:05:11.610 TEST_HEADER include/spdk/vmd.h 00:05:11.610 TEST_HEADER include/spdk/xor.h 00:05:11.610 TEST_HEADER include/spdk/zipf.h 00:05:11.610 CXX test/cpp_headers/accel.o 00:05:11.610 LINK thread 00:05:11.610 LINK lsvmd 00:05:11.610 LINK spdk_lock 00:05:11.610 LINK hello_sock 00:05:11.610 LINK vtophys 00:05:11.610 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:11.610 CC examples/vmd/led/led.o 00:05:11.610 CXX test/cpp_headers/accel_module.o 00:05:11.610 CC app/spdk_nvme_discover/discovery_aer.o 00:05:11.610 LINK led 00:05:11.610 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:11.610 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:11.869 CC test/app/histogram_perf/histogram_perf.o 00:05:11.869 LINK nvme_fuzz 00:05:11.869 LINK histogram_perf 00:05:11.869 LINK spdk_nvme_discover 00:05:11.869 CC examples/idxd/perf/perf.o 00:05:11.869 LINK env_dpdk_post_init 00:05:11.869 CC test/app/jsoncat/jsoncat.o 00:05:11.869 CXX test/cpp_headers/assert.o 00:05:11.869 CXX test/cpp_headers/barrier.o 00:05:11.869 CC app/spdk_top/spdk_top.o 00:05:11.869 LINK jsoncat 00:05:11.869 CC test/app/stub/stub.o 00:05:11.869 LINK idxd_perf 00:05:11.869 CC test/rpc_client/rpc_client_test.o 00:05:11.869 LINK spdk_trace 00:05:12.127 CXX test/cpp_headers/base64.o 00:05:12.127 LINK stub 00:05:12.127 CC app/fio/nvme/fio_plugin.o 00:05:12.127 LINK rpc_client_test 00:05:12.127 CC 
examples/accel/perf/accel_perf.o 00:05:12.127 LINK mem_callbacks 00:05:12.127 CC app/fio/bdev/fio_plugin.o 00:05:12.127 CC examples/blob/hello_world/hello_blob.o 00:05:12.127 CC test/env/memory/memory_ut.o 00:05:12.127 LINK spdk_top 00:05:12.127 CC examples/blob/cli/blobcli.o 00:05:12.127 LINK iscsi_fuzz 00:05:12.127 CXX test/cpp_headers/bdev.o 00:05:12.127 LINK accel_perf 00:05:12.127 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:05:12.127 fio_plugin.c:1584:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:05:12.127 struct spdk_nvme_fdp_ruhs ruhs; 00:05:12.127 ^ 00:05:12.386 CXX test/cpp_headers/bdev_module.o 00:05:12.386 LINK hello_blob 00:05:12.386 LINK histogram_ut 00:05:12.386 1 warning generated. 00:05:12.386 LINK spdk_nvme 00:05:12.386 CC test/accel/dif/dif.o 00:05:12.386 LINK spdk_bdev 00:05:12.386 CC test/unit/lib/log/log.c/log_ut.o 00:05:12.386 LINK blobcli 00:05:12.386 CC test/env/pci/pci_ut.o 00:05:12.386 CXX test/cpp_headers/bdev_zone.o 00:05:12.386 LINK log_ut 00:05:12.386 CC examples/nvme/hello_world/hello_world.o 00:05:12.386 LINK dif 00:05:12.386 CC test/blobfs/mkfs/mkfs.o 00:05:12.386 CC test/event/event_perf/event_perf.o 00:05:12.386 gmake[2]: Nothing to be done for 'all'. 00:05:12.386 LINK pci_ut 00:05:12.644 CC test/unit/lib/rdma/common.c/common_ut.o 00:05:12.644 CC test/event/reactor/reactor.o 00:05:12.644 CXX test/cpp_headers/bit_array.o 00:05:12.644 LINK event_perf 00:05:12.644 CC examples/bdev/hello_world/hello_bdev.o 00:05:12.644 LINK hello_world 00:05:12.644 CC examples/nvme/reconnect/reconnect.o 00:05:12.644 LINK reactor 00:05:12.644 CC test/event/reactor_perf/reactor_perf.o 00:05:12.644 LINK mkfs 00:05:12.644 CXX test/cpp_headers/bit_pool.o 00:05:12.644 LINK reactor_perf 00:05:12.644 CC examples/bdev/bdevperf/bdevperf.o 00:05:12.644 LINK hello_bdev 00:05:12.644 CC test/unit/lib/util/base64.c/base64_ut.o 00:05:12.645 LINK reconnect 00:05:12.645 LINK common_ut 00:05:12.645 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:05:12.903 CC test/nvme/aer/aer.o 00:05:12.903 CXX test/cpp_headers/blob.o 00:05:12.903 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:12.903 CXX test/cpp_headers/blob_bdev.o 00:05:12.903 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:05:12.903 LINK base64_ut 00:05:12.903 LINK memory_ut 00:05:12.903 CXX test/cpp_headers/blobfs.o 00:05:12.903 CC test/bdev/bdevio/bdevio.o 00:05:12.903 CC examples/nvme/arbitration/arbitration.o 00:05:12.903 LINK aer 00:05:12.903 LINK cpuset_ut 00:05:12.903 LINK bit_array_ut 00:05:12.903 LINK bdevperf 00:05:12.903 LINK nvme_manage 00:05:12.903 CC test/nvme/reset/reset.o 00:05:12.903 CC examples/nvme/hotplug/hotplug.o 00:05:12.903 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:05:12.903 CXX test/cpp_headers/blobfs_bdev.o 00:05:12.903 LINK arbitration 00:05:13.183 CC test/unit/lib/dma/dma.c/dma_ut.o 00:05:13.183 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:05:13.183 LINK bdevio 00:05:13.183 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:05:13.183 CXX test/cpp_headers/conf.o 00:05:13.183 LINK crc16_ut 00:05:13.183 LINK reset 00:05:13.183 LINK hotplug 00:05:13.183 LINK crc32_ieee_ut 00:05:13.183 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:05:13.183 CC test/nvme/sgl/sgl.o 00:05:13.183 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:13.183 CXX test/cpp_headers/config.o 00:05:13.183 CXX test/cpp_headers/cpuset.o 00:05:13.183 CC test/nvme/e2edp/nvme_dp.o 00:05:13.183 LINK ioat_ut 
00:05:13.183 CXX test/cpp_headers/crc16.o 00:05:13.183 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:05:13.183 CC test/nvme/overhead/overhead.o 00:05:13.183 LINK crc32c_ut 00:05:13.183 LINK cmb_copy 00:05:13.183 LINK dma_ut 00:05:13.440 CXX test/cpp_headers/crc32.o 00:05:13.440 LINK sgl 00:05:13.440 CC test/nvme/err_injection/err_injection.o 00:05:13.440 LINK crc64_ut 00:05:13.440 CC examples/nvme/abort/abort.o 00:05:13.440 LINK nvme_dp 00:05:13.440 CC test/unit/lib/util/dif.c/dif_ut.o 00:05:13.440 LINK overhead 00:05:13.440 CC test/nvme/startup/startup.o 00:05:13.440 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:13.440 LINK err_injection 00:05:13.440 CC test/unit/lib/util/file.c/file_ut.o 00:05:13.440 CC test/nvme/reserve/reserve.o 00:05:13.440 CXX test/cpp_headers/crc64.o 00:05:13.440 LINK abort 00:05:13.440 LINK startup 00:05:13.440 CC test/unit/lib/util/iov.c/iov_ut.o 00:05:13.440 LINK file_ut 00:05:13.440 LINK pmr_persistence 00:05:13.440 CC test/unit/lib/util/math.c/math_ut.o 00:05:13.440 CC test/unit/lib/util/net.c/net_ut.o 00:05:13.440 CXX test/cpp_headers/dif.o 00:05:13.440 LINK reserve 00:05:13.697 CC test/nvme/simple_copy/simple_copy.o 00:05:13.697 LINK iov_ut 00:05:13.697 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:05:13.697 LINK math_ut 00:05:13.697 LINK net_ut 00:05:13.697 CXX test/cpp_headers/dma.o 00:05:13.697 CXX test/cpp_headers/endian.o 00:05:13.697 CC test/nvme/connect_stress/connect_stress.o 00:05:13.697 CC test/nvme/compliance/nvme_compliance.o 00:05:13.697 CC test/nvme/boot_partition/boot_partition.o 00:05:13.697 CC examples/nvmf/nvmf/nvmf.o 00:05:13.697 LINK simple_copy 00:05:13.697 CC test/unit/lib/util/string.c/string_ut.o 00:05:13.697 LINK dif_ut 00:05:13.697 CXX test/cpp_headers/env.o 00:05:13.697 LINK connect_stress 00:05:13.697 LINK boot_partition 00:05:13.697 LINK pipe_ut 00:05:13.697 CC test/unit/lib/util/xor.c/xor_ut.o 00:05:13.697 CC test/nvme/fused_ordering/fused_ordering.o 00:05:13.697 CXX test/cpp_headers/env_dpdk.o 00:05:13.697 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:13.955 CXX test/cpp_headers/event.o 00:05:13.955 LINK nvmf 00:05:13.955 LINK string_ut 00:05:13.955 LINK nvme_compliance 00:05:13.955 CC test/nvme/fdp/fdp.o 00:05:13.955 CXX test/cpp_headers/fd.o 00:05:13.955 CXX test/cpp_headers/fd_group.o 00:05:13.955 LINK fused_ordering 00:05:13.955 LINK doorbell_aers 00:05:13.955 CXX test/cpp_headers/file.o 00:05:13.955 CXX test/cpp_headers/ftl.o 00:05:13.955 LINK xor_ut 00:05:13.955 CXX test/cpp_headers/gpt_spec.o 00:05:13.955 LINK fdp 00:05:13.955 CXX test/cpp_headers/hexlify.o 00:05:13.955 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:05:13.955 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:05:13.955 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:05:13.955 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:05:13.955 CXX test/cpp_headers/histogram_data.o 00:05:14.212 CXX test/cpp_headers/idxd.o 00:05:14.212 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:05:14.212 CXX test/cpp_headers/idxd_spec.o 00:05:14.212 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:05:14.212 CXX test/cpp_headers/init.o 00:05:14.212 LINK pci_event_ut 00:05:14.212 CXX test/cpp_headers/ioat.o 00:05:14.212 CXX test/cpp_headers/ioat_spec.o 00:05:14.212 LINK json_util_ut 00:05:14.212 CXX test/cpp_headers/iscsi_spec.o 00:05:14.212 CXX test/cpp_headers/json.o 00:05:14.212 CXX test/cpp_headers/jsonrpc.o 00:05:14.212 LINK idxd_user_ut 00:05:14.212 CXX test/cpp_headers/keyring.o 00:05:14.468 CXX test/cpp_headers/keyring_module.o 00:05:14.468 CXX 
test/cpp_headers/likely.o 00:05:14.468 CXX test/cpp_headers/log.o 00:05:14.468 LINK idxd_ut 00:05:14.468 CXX test/cpp_headers/lvol.o 00:05:14.468 CXX test/cpp_headers/memory.o 00:05:14.468 LINK json_write_ut 00:05:14.468 CXX test/cpp_headers/mmio.o 00:05:14.468 CXX test/cpp_headers/nbd.o 00:05:14.468 CXX test/cpp_headers/net.o 00:05:14.468 CXX test/cpp_headers/notify.o 00:05:14.468 CXX test/cpp_headers/nvme.o 00:05:14.468 CXX test/cpp_headers/nvme_intel.o 00:05:14.468 CXX test/cpp_headers/nvme_ocssd.o 00:05:14.468 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:14.468 CXX test/cpp_headers/nvme_spec.o 00:05:14.468 LINK json_parse_ut 00:05:14.468 CXX test/cpp_headers/nvme_zns.o 00:05:14.468 CXX test/cpp_headers/nvmf.o 00:05:14.725 CXX test/cpp_headers/nvmf_cmd.o 00:05:14.725 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:14.725 CXX test/cpp_headers/nvmf_spec.o 00:05:14.725 CXX test/cpp_headers/nvmf_transport.o 00:05:14.725 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:05:14.725 CXX test/cpp_headers/opal.o 00:05:14.725 CXX test/cpp_headers/opal_spec.o 00:05:14.725 CXX test/cpp_headers/pci_ids.o 00:05:14.725 CXX test/cpp_headers/pipe.o 00:05:14.725 CXX test/cpp_headers/queue.o 00:05:14.725 CXX test/cpp_headers/reduce.o 00:05:14.725 LINK jsonrpc_server_ut 00:05:14.725 CXX test/cpp_headers/rpc.o 00:05:14.725 CXX test/cpp_headers/scheduler.o 00:05:14.725 CXX test/cpp_headers/scsi.o 00:05:14.725 CXX test/cpp_headers/scsi_spec.o 00:05:14.983 CXX test/cpp_headers/sock.o 00:05:14.983 CXX test/cpp_headers/stdinc.o 00:05:14.983 CXX test/cpp_headers/string.o 00:05:14.983 CXX test/cpp_headers/thread.o 00:05:14.983 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:05:14.983 CXX test/cpp_headers/trace.o 00:05:14.983 CXX test/cpp_headers/trace_parser.o 00:05:14.983 CXX test/cpp_headers/tree.o 00:05:14.983 CXX test/cpp_headers/ublk.o 00:05:14.983 CXX test/cpp_headers/util.o 00:05:14.983 CXX test/cpp_headers/uuid.o 00:05:14.983 CXX test/cpp_headers/version.o 00:05:14.983 CXX test/cpp_headers/vfio_user_pci.o 00:05:14.983 CXX test/cpp_headers/vfio_user_spec.o 00:05:14.983 CXX test/cpp_headers/vhost.o 00:05:14.983 CXX test/cpp_headers/vmd.o 00:05:14.983 CXX test/cpp_headers/xor.o 00:05:14.983 CXX test/cpp_headers/zipf.o 00:05:15.240 LINK rpc_ut 00:05:15.240 CC test/unit/lib/sock/sock.c/sock_ut.o 00:05:15.240 CC test/unit/lib/sock/posix.c/posix_ut.o 00:05:15.240 CC test/unit/lib/thread/thread.c/thread_ut.o 00:05:15.240 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:05:15.240 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:05:15.240 CC test/unit/lib/notify/notify.c/notify_ut.o 00:05:15.498 LINK keyring_ut 00:05:15.498 LINK notify_ut 00:05:15.755 LINK posix_ut 00:05:15.755 LINK iobuf_ut 00:05:15.755 LINK thread_ut 00:05:15.755 LINK sock_ut 00:05:15.755 CC test/unit/lib/accel/accel.c/accel_ut.o 00:05:15.755 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:05:15.755 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:05:16.013 CC test/unit/lib/blob/blob.c/blob_ut.o 00:05:16.013 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:05:16.013 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:05:16.013 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:05:16.013 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:05:16.013 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:05:16.013 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:05:16.013 LINK rpc_ut 00:05:16.013 LINK subsystem_ut 00:05:16.013 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:05:16.271 LINK blob_bdev_ut 00:05:16.271 CC 
test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:05:16.271 CC test/unit/lib/event/app.c/app_ut.o 00:05:16.529 LINK app_ut 00:05:16.529 LINK accel_ut 00:05:16.529 LINK nvme_ns_ut 00:05:16.529 LINK nvme_ctrlr_ocssd_cmd_ut 00:05:16.529 LINK nvme_ctrlr_cmd_ut 00:05:16.529 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:05:16.529 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:05:16.529 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:05:16.529 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:05:16.529 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:05:16.786 LINK nvme_ut 00:05:16.786 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:05:16.786 LINK reactor_ut 00:05:17.044 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:05:17.044 LINK nvme_ns_ocssd_cmd_ut 00:05:17.044 LINK nvme_ns_cmd_ut 00:05:17.044 LINK nvme_ctrlr_ut 00:05:17.044 CC test/unit/lib/bdev/part.c/part_ut.o 00:05:17.044 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:05:17.044 LINK nvme_quirks_ut 00:05:17.044 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:05:17.302 LINK scsi_nvme_ut 00:05:17.302 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:05:17.302 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:05:17.302 LINK nvme_poll_group_ut 00:05:17.302 LINK nvme_qpair_ut 00:05:17.302 LINK gpt_ut 00:05:17.302 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:05:17.302 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:05:17.561 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:05:17.561 LINK blob_ut 00:05:17.561 LINK nvme_pcie_ut 00:05:17.561 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:05:17.561 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:05:17.819 LINK bdev_zone_ut 00:05:17.819 LINK vbdev_lvol_ut 00:05:17.819 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:05:17.819 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:05:17.819 LINK bdev_raid_sb_ut 00:05:17.819 LINK part_ut 00:05:17.819 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:05:17.819 LINK nvme_transport_ut 00:05:18.077 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:05:18.077 LINK bdev_ut 00:05:18.077 LINK bdev_raid_ut 00:05:18.077 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:05:18.077 LINK nvme_tcp_ut 00:05:18.077 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:05:18.077 LINK nvme_io_msg_ut 00:05:18.077 LINK tree_ut 00:05:18.077 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:05:18.077 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:05:18.077 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:05:18.077 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:05:18.335 LINK vbdev_zone_block_ut 00:05:18.335 LINK bdev_ut 00:05:18.335 LINK concat_ut 00:05:18.335 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:05:18.335 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:05:18.335 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:05:18.335 LINK raid1_ut 00:05:18.335 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:05:18.335 LINK blobfs_sync_ut 00:05:18.335 LINK blobfs_bdev_ut 00:05:18.593 LINK blobfs_async_ut 00:05:18.593 LINK nvme_fabric_ut 00:05:18.593 LINK nvme_pcie_common_ut 00:05:18.593 LINK raid0_ut 00:05:18.593 LINK nvme_opal_ut 00:05:18.593 LINK lvol_ut 00:05:19.159 LINK bdev_nvme_ut 00:05:19.417 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:05:19.417 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:05:19.417 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:05:19.417 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:05:19.417 CC 
test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:05:19.417 LINK nvme_rdma_ut 00:05:19.417 LINK dev_ut 00:05:19.676 LINK scsi_ut 00:05:19.676 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:05:19.676 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:05:19.676 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:05:19.676 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:05:19.676 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:05:19.676 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:05:19.676 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:05:19.676 LINK lun_ut 00:05:19.676 LINK scsi_pr_ut 00:05:19.676 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:05:19.676 LINK scsi_bdev_ut 00:05:19.676 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:05:19.935 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:05:19.935 LINK ctrlr_bdev_ut 00:05:19.935 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:05:19.935 LINK auth_ut 00:05:20.193 LINK nvmf_ut 00:05:20.193 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:05:20.193 LINK ctrlr_discovery_ut 00:05:20.193 CC test/unit/lib/iscsi/param.c/param_ut.o 00:05:20.193 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:05:20.193 LINK init_grp_ut 00:05:20.193 LINK conn_ut 00:05:20.193 LINK subsystem_ut 00:05:20.194 LINK ctrlr_ut 00:05:20.194 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:05:20.451 LINK transport_ut 00:05:20.451 LINK param_ut 00:05:20.451 LINK rdma_ut 00:05:20.451 LINK tcp_ut 00:05:20.710 LINK portal_grp_ut 00:05:20.710 LINK tgt_node_ut 00:05:20.710 LINK iscsi_ut 00:05:20.710 00:05:20.710 real 1m2.987s 00:05:20.710 user 4m21.879s 00:05:20.710 sys 0m45.500s 00:05:20.710 06:18:33 unittest_build -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:05:20.710 06:18:33 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:05:20.710 ************************************ 00:05:20.710 END TEST unittest_build 00:05:20.710 ************************************ 00:05:20.979 06:18:33 -- common/autotest_common.sh@1142 -- $ return 0 00:05:20.979 06:18:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:20.979 06:18:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:20.979 06:18:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:20.979 06:18:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:20.979 06:18:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:20.979 06:18:33 -- pm/common@44 -- $ pid=1276 00:05:20.979 06:18:33 -- pm/common@50 -- $ kill -TERM 1276 00:05:20.979 06:18:33 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:20.979 06:18:33 -- nvmf/common.sh@7 -- # uname -s 00:05:20.979 06:18:33 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:05:20.979 06:18:33 -- nvmf/common.sh@7 -- # return 0 00:05:20.979 06:18:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:20.979 06:18:33 -- spdk/autotest.sh@32 -- # uname -s 00:05:20.979 06:18:33 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:05:20.979 06:18:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:20.979 06:18:33 -- pm/common@17 -- # local monitor 00:05:20.979 06:18:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:20.979 06:18:33 -- pm/common@25 -- # sleep 1 00:05:20.979 06:18:33 -- pm/common@21 -- # date +%s 00:05:20.979 06:18:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721715513 00:05:20.979 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721715513_collect-vmstat.pm.log 00:05:22.361 06:18:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:22.361 06:18:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:22.361 06:18:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.361 06:18:34 -- common/autotest_common.sh@10 -- # set +x 00:05:22.361 06:18:34 -- spdk/autotest.sh@59 -- # create_test_list 00:05:22.361 06:18:34 -- common/autotest_common.sh@746 -- # xtrace_disable 00:05:22.361 06:18:34 -- common/autotest_common.sh@10 -- # set +x 00:05:22.361 06:18:34 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:22.361 06:18:34 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:22.361 06:18:34 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:22.361 06:18:34 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:22.361 06:18:34 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:22.361 06:18:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:22.361 06:18:34 -- common/autotest_common.sh@1455 -- # uname 00:05:22.361 06:18:34 -- common/autotest_common.sh@1455 -- # '[' FreeBSD = FreeBSD ']' 00:05:22.361 06:18:34 -- common/autotest_common.sh@1456 -- # kldunload contigmem.ko 00:05:22.361 kldunload: can't find file contigmem.ko 00:05:22.361 06:18:34 -- common/autotest_common.sh@1456 -- # true 00:05:22.361 06:18:34 -- common/autotest_common.sh@1457 -- # '[' -n '' ']' 00:05:22.361 06:18:34 -- common/autotest_common.sh@1463 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:05:22.361 06:18:34 -- common/autotest_common.sh@1464 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:05:22.361 06:18:34 -- common/autotest_common.sh@1465 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:05:22.361 06:18:34 -- common/autotest_common.sh@1466 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:05:22.361 06:18:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:22.361 06:18:34 -- common/autotest_common.sh@1475 -- # uname 00:05:22.361 06:18:34 -- common/autotest_common.sh@1475 -- # [[ FreeBSD = FreeBSD ]] 00:05:22.361 06:18:34 -- common/autotest_common.sh@1475 -- # sysctl -n kern.ipc.maxsockbuf 00:05:22.361 06:18:34 -- common/autotest_common.sh@1475 -- # (( 2097152 < 4194304 )) 00:05:22.361 06:18:34 -- common/autotest_common.sh@1476 -- # sysctl kern.ipc.maxsockbuf=4194304 00:05:22.361 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:05:22.361 06:18:34 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:22.361 06:18:34 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:05:22.361 06:18:34 -- spdk/autotest.sh@72 -- # hash lcov 00:05:22.361 /home/vagrant/spdk_repo/spdk/autotest.sh: line 72: hash: lcov: not found 00:05:22.361 06:18:34 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:22.361 06:18:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.361 06:18:34 -- common/autotest_common.sh@10 -- # set +x 00:05:22.361 06:18:34 -- spdk/autotest.sh@91 -- # rm -f 00:05:22.361 06:18:34 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:22.361 kldunload: can't find file contigmem.ko 00:05:22.361 kldunload: can't find file nic_uio.ko 00:05:22.361 06:18:34 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:22.361 06:18:34 -- 
common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:22.361 06:18:34 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:22.361 06:18:34 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:22.361 06:18:34 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:22.361 06:18:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:22.361 06:18:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:22.361 06:18:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0ns1 00:05:22.361 06:18:34 -- scripts/common.sh@378 -- # local block=/dev/nvme0ns1 pt 00:05:22.361 06:18:34 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:05:22.361 nvme0ns1 is not a block device 00:05:22.361 06:18:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:05:22.361 /home/vagrant/spdk_repo/spdk/scripts/common.sh: line 391: blkid: command not found 00:05:22.361 06:18:34 -- scripts/common.sh@391 -- # pt= 00:05:22.361 06:18:34 -- scripts/common.sh@392 -- # return 1 00:05:22.361 06:18:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:05:22.361 1+0 records in 00:05:22.361 1+0 records out 00:05:22.361 1048576 bytes transferred in 0.006164 secs (170104524 bytes/sec) 00:05:22.361 06:18:34 -- spdk/autotest.sh@118 -- # sync 00:05:22.927 06:18:35 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:22.927 06:18:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:22.927 06:18:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:23.493 06:18:35 -- spdk/autotest.sh@124 -- # uname -s 00:05:23.493 06:18:35 -- spdk/autotest.sh@124 -- # '[' FreeBSD = Linux ']' 00:05:23.493 06:18:35 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:23.751 Contigmem (not present) 00:05:23.751 Buffer Size: not set 00:05:23.751 Num Buffers: not set 00:05:23.751 00:05:23.751 00:05:23.751 Type BDF Vendor Device Driver 00:05:23.751 NVMe 0:16:0 0x1b36 0x0010 nvme0 00:05:23.751 06:18:36 -- spdk/autotest.sh@130 -- # uname -s 00:05:23.751 06:18:36 -- spdk/autotest.sh@130 -- # [[ FreeBSD == Linux ]] 00:05:23.752 06:18:36 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:23.752 06:18:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.752 06:18:36 -- common/autotest_common.sh@10 -- # set +x 00:05:23.752 06:18:36 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:23.752 06:18:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.752 06:18:36 -- common/autotest_common.sh@10 -- # set +x 00:05:23.752 06:18:36 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.752 hw.nic_uio.bdfs="0:16:0" 00:05:23.752 hw.contigmem.num_buffers="8" 00:05:23.752 hw.contigmem.buffer_size="268435456" 00:05:24.318 06:18:36 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:24.318 06:18:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.318 06:18:36 -- common/autotest_common.sh@10 -- # set +x 00:05:24.318 06:18:36 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:24.318 06:18:36 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:24.588 06:18:36 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:24.588 06:18:36 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:24.588 06:18:36 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:24.588 06:18:36 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:24.588 06:18:36 -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:05:24.588 06:18:36 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:24.588 06:18:36 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.588 06:18:36 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:24.588 06:18:36 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:24.588 06:18:36 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:24.588 06:18:36 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:05:24.588 06:18:36 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:24.588 06:18:36 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:24.588 cat: /sys/bus/pci/devices/0000:00:10.0/device: No such file or directory 00:05:24.588 06:18:36 -- common/autotest_common.sh@1580 -- # device= 00:05:24.588 06:18:36 -- common/autotest_common.sh@1580 -- # true 00:05:24.588 06:18:36 -- common/autotest_common.sh@1581 -- # [[ '' == \0\x\0\a\5\4 ]] 00:05:24.588 06:18:36 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:24.588 06:18:36 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:24.588 06:18:36 -- common/autotest_common.sh@1593 -- # return 0 00:05:24.588 06:18:36 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:05:24.588 06:18:36 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:24.588 06:18:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.588 06:18:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.588 06:18:36 -- common/autotest_common.sh@10 -- # set +x 00:05:24.588 ************************************ 00:05:24.588 START TEST unittest 00:05:24.588 ************************************ 00:05:24.588 06:18:36 unittest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:24.588 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:24.588 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:24.588 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:24.588 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:24.588 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
00:05:24.588 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:24.588 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:24.588 ++ rpc_py=rpc_cmd 00:05:24.588 ++ set -e 00:05:24.588 ++ shopt -s nullglob 00:05:24.588 ++ shopt -s extglob 00:05:24.588 ++ shopt -s inherit_errexit 00:05:24.588 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:24.588 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:24.588 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:24.588 +++ CONFIG_WPDK_DIR= 00:05:24.588 +++ CONFIG_ASAN=n 00:05:24.588 +++ CONFIG_VBDEV_COMPRESS=n 00:05:24.588 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:24.588 +++ CONFIG_USDT=n 00:05:24.588 +++ CONFIG_CUSTOMOCF=n 00:05:24.588 +++ CONFIG_PREFIX=/usr/local 00:05:24.588 +++ CONFIG_RBD=n 00:05:24.588 +++ CONFIG_LIBDIR= 00:05:24.588 +++ CONFIG_IDXD=y 00:05:24.588 +++ CONFIG_NVME_CUSE=n 00:05:24.588 +++ CONFIG_SMA=n 00:05:24.588 +++ CONFIG_VTUNE=n 00:05:24.588 +++ CONFIG_TSAN=n 00:05:24.588 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:24.588 +++ CONFIG_VFIO_USER_DIR= 00:05:24.588 +++ CONFIG_PGO_CAPTURE=n 00:05:24.588 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:05:24.588 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:24.588 +++ CONFIG_LTO=n 00:05:24.588 +++ CONFIG_ISCSI_INITIATOR=n 00:05:24.588 +++ CONFIG_CET=n 00:05:24.588 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:24.589 +++ CONFIG_OCF_PATH= 00:05:24.589 +++ CONFIG_RDMA_SET_TOS=y 00:05:24.589 +++ CONFIG_HAVE_ARC4RANDOM=y 00:05:24.589 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:24.589 +++ CONFIG_UBLK=n 00:05:24.589 +++ CONFIG_ISAL_CRYPTO=y 00:05:24.589 +++ CONFIG_OPENSSL_PATH= 00:05:24.589 +++ CONFIG_OCF=n 00:05:24.589 +++ CONFIG_FUSE=n 00:05:24.589 +++ CONFIG_VTUNE_DIR= 00:05:24.589 +++ CONFIG_FUZZER_LIB= 00:05:24.589 +++ CONFIG_FUZZER=n 00:05:24.589 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:24.589 +++ CONFIG_CRYPTO=n 00:05:24.589 +++ CONFIG_PGO_USE=n 00:05:24.589 +++ CONFIG_VHOST=n 00:05:24.589 +++ CONFIG_DAOS=n 00:05:24.589 +++ CONFIG_DPDK_INC_DIR= 00:05:24.589 +++ CONFIG_DAOS_DIR= 00:05:24.589 +++ CONFIG_UNIT_TESTS=y 00:05:24.589 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:05:24.589 +++ CONFIG_VIRTIO=n 00:05:24.589 +++ CONFIG_DPDK_UADK=n 00:05:24.589 +++ CONFIG_COVERAGE=n 00:05:24.589 +++ CONFIG_RDMA=y 00:05:24.589 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:24.589 +++ CONFIG_URING_PATH= 00:05:24.589 +++ CONFIG_XNVME=n 00:05:24.589 +++ CONFIG_VFIO_USER=n 00:05:24.589 +++ CONFIG_ARCH=native 00:05:24.589 +++ CONFIG_HAVE_EVP_MAC=y 00:05:24.589 +++ CONFIG_URING_ZNS=n 00:05:24.589 +++ CONFIG_WERROR=y 00:05:24.589 +++ CONFIG_HAVE_LIBBSD=n 00:05:24.589 +++ CONFIG_UBSAN=n 00:05:24.589 +++ CONFIG_IPSEC_MB_DIR= 00:05:24.589 +++ CONFIG_GOLANG=n 00:05:24.589 +++ CONFIG_ISAL=y 00:05:24.589 +++ CONFIG_IDXD_KERNEL=n 00:05:24.589 +++ CONFIG_DPDK_LIB_DIR= 00:05:24.589 +++ CONFIG_RDMA_PROV=verbs 00:05:24.589 +++ CONFIG_APPS=y 00:05:24.589 +++ CONFIG_SHARED=n 00:05:24.589 +++ CONFIG_HAVE_KEYUTILS=n 00:05:24.589 +++ CONFIG_FC_PATH= 00:05:24.589 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:24.589 +++ CONFIG_FC=n 00:05:24.589 +++ CONFIG_AVAHI=n 00:05:24.589 +++ CONFIG_FIO_PLUGIN=y 00:05:24.589 +++ CONFIG_RAID5F=n 00:05:24.589 +++ CONFIG_EXAMPLES=y 00:05:24.589 +++ CONFIG_TESTS=y 00:05:24.589 +++ CONFIG_CRYPTO_MLX5=n 00:05:24.589 +++ CONFIG_MAX_LCORES=128 00:05:24.589 +++ CONFIG_IPSEC_MB=n 00:05:24.589 +++ CONFIG_PGO_DIR= 00:05:24.589 +++ CONFIG_DEBUG=y 00:05:24.589 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:24.589 +++ CONFIG_CROSS_PREFIX= 00:05:24.589 
+++ CONFIG_URING=n 00:05:24.589 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:24.589 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:24.589 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:24.589 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:24.589 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:24.589 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:24.589 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:24.589 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:24.589 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:24.589 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:24.589 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:24.589 +++ VHOST_APP=("$_app_dir/vhost") 00:05:24.589 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:24.589 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:24.589 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:24.589 +++ [[ #ifndef SPDK_CONFIG_H 00:05:24.589 #define SPDK_CONFIG_H 00:05:24.589 #define SPDK_CONFIG_APPS 1 00:05:24.589 #define SPDK_CONFIG_ARCH native 00:05:24.589 #undef SPDK_CONFIG_ASAN 00:05:24.589 #undef SPDK_CONFIG_AVAHI 00:05:24.589 #undef SPDK_CONFIG_CET 00:05:24.589 #undef SPDK_CONFIG_COVERAGE 00:05:24.589 #define SPDK_CONFIG_CROSS_PREFIX 00:05:24.589 #undef SPDK_CONFIG_CRYPTO 00:05:24.589 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:24.589 #undef SPDK_CONFIG_CUSTOMOCF 00:05:24.589 #undef SPDK_CONFIG_DAOS 00:05:24.589 #define SPDK_CONFIG_DAOS_DIR 00:05:24.589 #define SPDK_CONFIG_DEBUG 1 00:05:24.589 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:24.589 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:24.589 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:24.589 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:24.589 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:24.589 #undef SPDK_CONFIG_DPDK_UADK 00:05:24.589 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:24.589 #define SPDK_CONFIG_EXAMPLES 1 00:05:24.589 #undef SPDK_CONFIG_FC 00:05:24.589 #define SPDK_CONFIG_FC_PATH 00:05:24.589 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:24.589 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:24.589 #undef SPDK_CONFIG_FUSE 00:05:24.589 #undef SPDK_CONFIG_FUZZER 00:05:24.589 #define SPDK_CONFIG_FUZZER_LIB 00:05:24.589 #undef SPDK_CONFIG_GOLANG 00:05:24.589 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:24.589 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:24.589 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:24.589 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:05:24.589 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:24.589 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:24.589 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:05:24.589 #define SPDK_CONFIG_IDXD 1 00:05:24.589 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:24.589 #undef SPDK_CONFIG_IPSEC_MB 00:05:24.589 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:24.589 #define SPDK_CONFIG_ISAL 1 00:05:24.589 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:24.589 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:05:24.589 #define SPDK_CONFIG_LIBDIR 00:05:24.589 #undef SPDK_CONFIG_LTO 00:05:24.589 #define SPDK_CONFIG_MAX_LCORES 128 00:05:24.589 #undef SPDK_CONFIG_NVME_CUSE 00:05:24.589 #undef SPDK_CONFIG_OCF 00:05:24.589 #define SPDK_CONFIG_OCF_PATH 00:05:24.589 #define SPDK_CONFIG_OPENSSL_PATH 00:05:24.589 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:24.589 #define SPDK_CONFIG_PGO_DIR 00:05:24.589 #undef SPDK_CONFIG_PGO_USE 00:05:24.589 #define SPDK_CONFIG_PREFIX /usr/local 00:05:24.589 #undef SPDK_CONFIG_RAID5F 00:05:24.589 #undef SPDK_CONFIG_RBD 
00:05:24.589 #define SPDK_CONFIG_RDMA 1 00:05:24.589 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:24.589 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:24.589 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:05:24.589 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:24.589 #undef SPDK_CONFIG_SHARED 00:05:24.589 #undef SPDK_CONFIG_SMA 00:05:24.589 #define SPDK_CONFIG_TESTS 1 00:05:24.589 #undef SPDK_CONFIG_TSAN 00:05:24.589 #undef SPDK_CONFIG_UBLK 00:05:24.589 #undef SPDK_CONFIG_UBSAN 00:05:24.589 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:24.589 #undef SPDK_CONFIG_URING 00:05:24.589 #define SPDK_CONFIG_URING_PATH 00:05:24.589 #undef SPDK_CONFIG_URING_ZNS 00:05:24.589 #undef SPDK_CONFIG_USDT 00:05:24.589 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:24.589 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:24.589 #undef SPDK_CONFIG_VFIO_USER 00:05:24.589 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:24.589 #undef SPDK_CONFIG_VHOST 00:05:24.589 #undef SPDK_CONFIG_VIRTIO 00:05:24.589 #undef SPDK_CONFIG_VTUNE 00:05:24.589 #define SPDK_CONFIG_VTUNE_DIR 00:05:24.589 #define SPDK_CONFIG_WERROR 1 00:05:24.589 #define SPDK_CONFIG_WPDK_DIR 00:05:24.589 #undef SPDK_CONFIG_XNVME 00:05:24.589 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:24.589 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:24.589 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:24.589 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:24.589 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.589 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.589 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:05:24.589 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:05:24.589 ++++ export PATH 00:05:24.589 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:05:24.589 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:24.589 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:24.589 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:24.589 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:24.589 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:24.589 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:24.589 +++ TEST_TAG=N/A 00:05:24.589 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:24.589 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:05:24.589 ++++ uname -s 00:05:24.589 +++ PM_OS=FreeBSD 00:05:24.589 +++ MONITOR_RESOURCES_SUDO=() 00:05:24.589 +++ declare -A MONITOR_RESOURCES_SUDO 00:05:24.589 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:24.589 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:24.589 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:24.589 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:24.589 +++ SUDO[0]= 00:05:24.590 +++ SUDO[1]='sudo -E' 00:05:24.590 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:24.590 +++ [[ FreeBSD == FreeBSD ]] 00:05:24.590 +++ MONITOR_RESOURCES=(collect-vmstat) 00:05:24.590 +++ [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:05:24.590 ++ : 1 00:05:24.590 ++ export RUN_NIGHTLY 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_RUN_VALGRIND 00:05:24.590 ++ : 1 00:05:24.590 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:24.590 ++ : 1 00:05:24.590 ++ export SPDK_TEST_UNITTEST 00:05:24.590 ++ : 00:05:24.590 ++ export SPDK_TEST_AUTOBUILD 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_RELEASE_BUILD 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_ISAL 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_ISCSI 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:24.590 ++ : 1 00:05:24.590 ++ export SPDK_TEST_NVME 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_NVME_PMR 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_NVME_BP 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_NVME_CLI 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_NVME_CUSE 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_NVME_FDP 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_NVMF 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_VFIOUSER 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_FUZZER 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_FUZZER_SHORT 00:05:24.590 ++ : rdma 00:05:24.590 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_RBD 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_VHOST 00:05:24.590 ++ : 1 00:05:24.590 ++ export SPDK_TEST_BLOCKDEV 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_IOAT 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_BLOBFS 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_VHOST_INIT 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_LVOL 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_RUN_ASAN 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_RUN_UBSAN 00:05:24.590 ++ : 00:05:24.590 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_RUN_NON_ROOT 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_CRYPTO 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_FTL 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_OCF 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_VMD 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_OPAL 00:05:24.590 ++ : 00:05:24.590 ++ export SPDK_TEST_NATIVE_DPDK 00:05:24.590 ++ : true 00:05:24.590 ++ export SPDK_AUTOTEST_X 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_RAID5 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_URING 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_USDT 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_USE_IGB_UIO 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_SCHEDULER 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_SCANBUILD 00:05:24.590 ++ : 00:05:24.590 ++ export SPDK_TEST_NVMF_NICS 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_SMA 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_DAOS 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_XNVME 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_ACCEL_DSA 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_ACCEL_IAA 00:05:24.590 ++ : 00:05:24.590 ++ export SPDK_TEST_FUZZER_TARGET 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_TEST_NVMF_MDNS 00:05:24.590 ++ : 0 00:05:24.590 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:24.590 ++ export 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:24.590 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:24.590 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:24.590 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:24.590 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:24.590 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:24.590 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:24.590 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:24.590 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:24.590 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:24.590 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:24.590 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:24.590 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:24.590 ++ PYTHONDONTWRITEBYTECODE=1 00:05:24.590 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:24.590 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:24.590 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:24.590 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:24.590 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:24.590 ++ rm -rf /var/tmp/asan_suppression_file 00:05:24.590 ++ cat 00:05:24.590 ++ echo leak:libfuse3.so 00:05:24.590 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:24.590 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:24.590 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:24.590 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:24.590 ++ '[' -z /var/spdk/dependencies ']' 00:05:24.590 ++ export DEPENDENCY_DIR 00:05:24.590 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:24.590 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:24.590 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:24.590 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:24.590 ++ export QEMU_BIN= 00:05:24.590 ++ QEMU_BIN= 00:05:24.590 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:24.590 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:24.590 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:24.590 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:24.590 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:24.590 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:24.590 ++ '[' 0 -eq 0 ']' 00:05:24.590 ++ export valgrind= 00:05:24.590 ++ valgrind= 00:05:24.590 +++ uname -s 00:05:24.590 ++ '[' FreeBSD = Linux ']' 
00:05:24.590 +++ uname -s 00:05:24.590 ++ '[' FreeBSD = FreeBSD ']' 00:05:24.590 ++ MAKE=gmake 00:05:24.590 +++ sysctl -a 00:05:24.590 +++ grep -E -i hw.ncpu 00:05:24.590 +++ awk '{print $2}' 00:05:24.590 ++ MAKEFLAGS=-j10 00:05:24.590 ++ HUGEMEM=2048 00:05:24.590 ++ export HUGEMEM=2048 00:05:24.590 ++ HUGEMEM=2048 00:05:24.590 ++ NO_HUGE=() 00:05:24.590 ++ TEST_MODE= 00:05:24.590 ++ [[ -z '' ]] 00:05:24.590 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:24.590 ++ exec 00:05:24.590 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:24.590 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:24.590 ++ set_test_storage 2147483648 00:05:24.590 ++ [[ -v testdir ]] 00:05:24.590 ++ local requested_size=2147483648 00:05:24.590 ++ local mount target_dir 00:05:24.590 ++ local -A mounts fss sizes avails uses 00:05:24.590 ++ local source fs size avail mount use 00:05:24.590 ++ local storage_fallback storage_candidates 00:05:24.590 +++ mktemp -udt spdk.XXXXXX 00:05:24.590 ++ storage_fallback=/tmp/spdk.XXXXXX.bH5qDQz4bX 00:05:24.590 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:24.590 ++ [[ -n '' ]] 00:05:24.590 ++ [[ -n '' ]] 00:05:24.590 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.bH5qDQz4bX/tests/unit /tmp/spdk.XXXXXX.bH5qDQz4bX 00:05:24.590 ++ requested_size=2214592512 00:05:24.590 ++ read -r source fs size use avail _ mount 00:05:24.590 +++ df -T 00:05:24.590 +++ grep -v Filesystem 00:05:24.590 ++ mounts["$mount"]=/dev/gptid/043e6f36-2a13-11ef-a525-001e676338ce 00:05:24.590 ++ fss["$mount"]=ufs 00:05:24.590 ++ avails["$mount"]=17240838144 00:05:24.590 ++ sizes["$mount"]=31182712832 00:05:24.590 ++ uses["$mount"]=11447259136 00:05:24.591 ++ read -r source fs size use avail _ mount 00:05:24.591 ++ mounts["$mount"]=devfs 00:05:24.591 ++ fss["$mount"]=devfs 00:05:24.591 ++ avails["$mount"]=1024 00:05:24.591 ++ sizes["$mount"]=1024 00:05:24.591 ++ uses["$mount"]=0 00:05:24.591 ++ read -r source fs size use avail _ mount 00:05:24.591 ++ mounts["$mount"]=tmpfs 00:05:24.591 ++ fss["$mount"]=tmpfs 00:05:24.591 ++ avails["$mount"]=2147438592 00:05:24.591 ++ sizes["$mount"]=2147483648 00:05:24.591 ++ uses["$mount"]=45056 00:05:24.591 ++ read -r source fs size use avail _ mount 00:05:24.591 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt/output 00:05:24.591 ++ fss["$mount"]=fusefs.sshfs 00:05:24.591 ++ avails["$mount"]=92799512576 00:05:24.591 ++ sizes["$mount"]=105088212992 00:05:24.591 ++ uses["$mount"]=6903267328 00:05:24.591 ++ read -r source fs size use avail _ mount 00:05:24.591 ++ printf '* Looking for test storage...\n' 00:05:24.591 * Looking for test storage... 
00:05:24.591 ++ local target_space new_size 00:05:24.591 ++ for target_dir in "${storage_candidates[@]}" 00:05:24.591 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:24.591 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:24.591 ++ mount=/ 00:05:24.591 ++ target_space=17240838144 00:05:24.591 ++ (( target_space == 0 || target_space < requested_size )) 00:05:24.591 ++ (( target_space >= requested_size )) 00:05:24.591 ++ [[ ufs == tmpfs ]] 00:05:24.591 ++ [[ ufs == ramfs ]] 00:05:24.591 ++ [[ / == / ]] 00:05:24.591 ++ new_size=13661851648 00:05:24.591 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:24.591 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:24.591 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:24.591 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:24.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:24.591 ++ return 0 00:05:24.591 ++ set -o errtrace 00:05:24.591 ++ shopt -s extdebug 00:05:24.591 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:24.591 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:24.591 06:18:37 unittest -- common/autotest_common.sh@1687 -- # true 00:05:24.591 06:18:37 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:05:24.591 06:18:37 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:24.591 06:18:37 unittest -- common/autotest_common.sh@29 -- # exec 00:05:24.591 06:18:37 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:24.591 06:18:37 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:24.591 06:18:37 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:24.591 06:18:37 unittest -- common/autotest_common.sh@18 -- # set -x 00:05:24.591 06:18:37 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:24.591 06:18:37 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:05:24.591 06:18:37 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:05:24.591 06:18:37 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:05:24.591 06:18:37 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:24.591 06:18:37 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=clang 00:05:24.591 06:18:37 unittest -- unit/unittest.sh@181 -- # hash lcov 00:05:24.591 /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 181: hash: lcov: not found 00:05:24.591 06:18:37 unittest -- unit/unittest.sh@184 -- # cov_avail=no 00:05:24.591 06:18:37 unittest -- unit/unittest.sh@186 -- # '[' no = yes ']' 00:05:24.591 06:18:37 unittest -- unit/unittest.sh@208 -- # uname -m 00:05:24.591 06:18:37 unittest -- unit/unittest.sh@208 -- # '[' amd64 = aarch64 ']' 00:05:24.591 06:18:37 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:24.591 06:18:37 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.591 06:18:37 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.591 06:18:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:24.591 ************************************ 00:05:24.591 START TEST unittest_pci_event 00:05:24.591 ************************************ 00:05:24.591 06:18:37 unittest.unittest_pci_event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:24.591 00:05:24.591 
00:05:24.591 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.591 http://cunit.sourceforge.net/ 00:05:24.591 00:05:24.591 00:05:24.591 Suite: pci_event 00:05:24.591 Test: test_pci_parse_event ...passed 00:05:24.591 00:05:24.591 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.591 suites 1 1 n/a 0 0 00:05:24.591 tests 1 1 1 0 0 00:05:24.591 asserts 1 1 1 0 n/a 00:05:24.591 00:05:24.591 Elapsed time = 0.000 seconds 00:05:24.591 00:05:24.591 real 0m0.027s 00:05:24.591 user 0m0.002s 00:05:24.591 sys 0m0.010s 00:05:24.591 06:18:37 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.591 06:18:37 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:05:24.591 ************************************ 00:05:24.591 END TEST unittest_pci_event 00:05:24.591 ************************************ 00:05:24.852 06:18:37 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:24.852 06:18:37 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:24.852 06:18:37 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.852 06:18:37 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.852 06:18:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:24.852 ************************************ 00:05:24.852 START TEST unittest_include 00:05:24.852 ************************************ 00:05:24.852 06:18:37 unittest.unittest_include -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:24.852 00:05:24.852 00:05:24.852 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.852 http://cunit.sourceforge.net/ 00:05:24.852 00:05:24.852 00:05:24.852 Suite: histogram 00:05:24.852 Test: histogram_test ...passed 00:05:24.852 Test: histogram_merge ...passed 00:05:24.852 00:05:24.852 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.852 suites 1 1 n/a 0 0 00:05:24.852 tests 2 2 2 0 0 00:05:24.852 asserts 50 50 50 0 n/a 00:05:24.852 00:05:24.852 Elapsed time = 0.000 seconds 00:05:24.852 00:05:24.852 real 0m0.008s 00:05:24.852 user 0m0.000s 00:05:24.852 sys 0m0.008s 00:05:24.852 06:18:37 unittest.unittest_include -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.852 ************************************ 00:05:24.852 END TEST unittest_include 00:05:24.852 ************************************ 00:05:24.852 06:18:37 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:05:24.852 06:18:37 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:24.852 06:18:37 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:05:24.852 06:18:37 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.853 06:18:37 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.853 06:18:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:24.853 ************************************ 00:05:24.853 START TEST unittest_bdev 00:05:24.853 ************************************ 00:05:24.853 06:18:37 unittest.unittest_bdev -- common/autotest_common.sh@1123 -- # unittest_bdev 00:05:24.853 06:18:37 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:24.853 00:05:24.853 00:05:24.853 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.853 http://cunit.sourceforge.net/ 
00:05:24.853 00:05:24.853 00:05:24.853 Suite: bdev 00:05:24.853 Test: bytes_to_blocks_test ...passed 00:05:24.853 Test: num_blocks_test ...passed 00:05:24.853 Test: io_valid_test ...passed 00:05:24.853 Test: open_write_test ...[2024-07-23 06:18:37.177894] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.178075] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.178092] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:24.853 passed 00:05:24.853 Test: claim_test ...passed 00:05:24.853 Test: alias_add_del_test ...[2024-07-23 06:18:37.180445] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:24.853 [2024-07-23 06:18:37.180470] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4663:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:24.853 [2024-07-23 06:18:37.180481] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:05:24.853 passed 00:05:24.853 Test: get_device_stat_test ...passed 00:05:24.853 Test: bdev_io_types_test ...passed 00:05:24.853 Test: bdev_io_wait_test ...passed 00:05:24.853 Test: bdev_io_spans_split_test ...passed 00:05:24.853 Test: bdev_io_boundary_split_test ...passed 00:05:24.853 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-23 06:18:37.186747] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3214:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:24.853 passed 00:05:24.853 Test: bdev_io_mix_split_test ...passed 00:05:24.853 Test: bdev_io_split_with_io_wait ...passed 00:05:24.853 Test: bdev_io_write_unit_split_test ...[2024-07-23 06:18:37.191098] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2766:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:24.853 [2024-07-23 06:18:37.191150] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2766:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:24.853 [2024-07-23 06:18:37.191239] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2766:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:24.853 [2024-07-23 06:18:37.191265] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2766:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:24.853 passed 00:05:24.853 Test: bdev_io_alignment_with_boundary ...passed 00:05:24.853 Test: bdev_io_alignment ...passed 00:05:24.853 Test: bdev_histograms ...passed 00:05:24.853 Test: bdev_write_zeroes ...passed 00:05:24.853 Test: bdev_compare_and_write ...passed 00:05:24.853 Test: bdev_compare ...passed 00:05:24.853 Test: bdev_compare_emulated ...passed 00:05:24.853 Test: bdev_zcopy_write ...passed 00:05:24.853 Test: bdev_zcopy_read ...passed 00:05:24.853 Test: bdev_open_while_hotremove ...passed 00:05:24.853 Test: bdev_close_while_hotremove ...passed 00:05:24.853 Test: bdev_open_ext_test ...[2024-07-23 06:18:37.207807] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8217:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:24.853 passed 00:05:24.853 Test: bdev_open_ext_unregister ...passed 00:05:24.853 Test: bdev_set_io_timeout ...[2024-07-23 06:18:37.207851] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8217:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:24.853 passed 00:05:24.853 Test: bdev_set_qd_sampling ...passed 00:05:24.853 Test: lba_range_overlap ...passed 00:05:24.853 Test: lock_lba_range_check_ranges ...passed 00:05:24.853 Test: lock_lba_range_with_io_outstanding ...passed 00:05:24.853 Test: lock_lba_range_overlapped ...passed 00:05:24.853 Test: bdev_quiesce ...[2024-07-23 06:18:37.215401] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10186:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:05:24.853 passed 00:05:24.853 Test: bdev_io_abort ...passed 00:05:24.853 Test: bdev_unmap ...passed 00:05:24.853 Test: bdev_write_zeroes_split_test ...passed 00:05:24.853 Test: bdev_set_options_test ...passed 00:05:24.853 Test: bdev_get_memory_domains ...passed 00:05:24.853 Test: bdev_io_ext ...[2024-07-23 06:18:37.219877] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:24.853 passed 00:05:24.853 Test: bdev_io_ext_no_opts ...passed 00:05:24.853 Test: bdev_io_ext_invalid_opts ...passed 00:05:24.853 Test: bdev_io_ext_split ...passed 00:05:24.853 Test: bdev_io_ext_bounce_buffer ...passed 00:05:24.853 Test: bdev_register_uuid_alias ...[2024-07-23 06:18:37.227451] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 65975108-48bb-11ef-a06c-59ddad71024c already exists 00:05:24.853 [2024-07-23 06:18:37.227483] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:65975108-48bb-11ef-a06c-59ddad71024c alias for bdev bdev0 00:05:24.853 passed 00:05:24.853 Test: bdev_unregister_by_name ...[2024-07-23 06:18:37.227933] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8007:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:24.853 [2024-07-23 06:18:37.227947] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8016:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:05:24.853 passed 00:05:24.853 Test: for_each_bdev_test ...passed 00:05:24.853 Test: bdev_seek_test ...passed 00:05:24.853 Test: bdev_copy ...passed 00:05:24.853 Test: bdev_copy_split_test ...passed 00:05:24.853 Test: examine_locks ...passed 00:05:24.853 Test: claim_v2_rwo ...passed 00:05:24.853 Test: claim_v2_rom ...[2024-07-23 06:18:37.232164] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.232193] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.232204] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.232214] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.232223] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.232234] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8737:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:24.853 [2024-07-23 06:18:37.232262] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.232272] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.232281] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.232290] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:24.853 passed 00:05:24.853 Test: claim_v2_rwm ...passed 00:05:24.853 Test: claim_v2_existing_writer ...passed 00:05:24.853 Test: claim_v2_existing_v1 ...[2024-07-23 06:18:37.232305] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8779:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:24.853 [2024-07-23 06:18:37.232314] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8775:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:24.853 [2024-07-23 06:18:37.232336] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8810:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:24.853 [2024-07-23 06:18:37.232347] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.232357] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.232366] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by 
module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.232374] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.232383] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8829:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:24.853 [2024-07-23 06:18:37.232394] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8810:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:24.853 [2024-07-23 06:18:37.232416] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8775:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:24.853 [2024-07-23 06:18:37.232425] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8775:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:24.854 [2024-07-23 06:18:37.232446] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:24.854 passed 00:05:24.854 Test: claim_v1_existing_v2 ...passed 00:05:24.854 Test: examine_claimed ...passed 00:05:24.854 00:05:24.854 [2024-07-23 06:18:37.232455] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:24.854 [2024-07-23 06:18:37.232470] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:24.854 [2024-07-23 06:18:37.232652] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:24.854 [2024-07-23 06:18:37.232666] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:24.854 [2024-07-23 06:18:37.232676] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:24.854 [2024-07-23 06:18:37.232716] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:24.854 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.854 suites 1 1 n/a 0 0 00:05:24.854 tests 59 59 59 0 0 00:05:24.854 asserts 4599 4599 4599 0 n/a 00:05:24.854 00:05:24.854 Elapsed time = 0.062 seconds 00:05:24.854 06:18:37 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:24.854 00:05:24.854 00:05:24.854 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.854 http://cunit.sourceforge.net/ 00:05:24.854 00:05:24.854 00:05:24.854 Suite: nvme 00:05:24.854 Test: test_create_ctrlr ...passed 00:05:24.854 Test: test_reset_ctrlr ...[2024-07-23 06:18:37.242148] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:24.854 passed 00:05:24.854 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:24.854 Test: test_failover_ctrlr ...passed 00:05:24.854 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-23 06:18:37.243144] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 [2024-07-23 06:18:37.243186] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 [2024-07-23 06:18:37.243525] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 passed 00:05:24.854 Test: test_pending_reset ...[2024-07-23 06:18:37.243914] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 [2024-07-23 06:18:37.244209] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 passed 00:05:24.854 Test: test_attach_ctrlr ...passed 00:05:24.854 Test: test_aer_cb ...[2024-07-23 06:18:37.244274] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:24.854 passed 00:05:24.854 Test: test_submit_nvme_cmd ...passed 00:05:24.854 Test: test_add_remove_trid ...passed 00:05:24.854 Test: test_abort ...[2024-07-23 06:18:37.245141] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7452:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:24.854 passed 00:05:24.854 Test: test_get_io_qpair ...passed 00:05:24.854 Test: test_bdev_unregister ...passed 00:05:24.854 Test: test_compare_ns ...passed 00:05:24.854 Test: test_init_ana_log_page ...passed 00:05:24.854 Test: test_get_memory_domains ...passed 00:05:24.854 Test: test_reconnect_qpair ...passed 00:05:24.854 Test: test_create_bdev_ctrlr ...passed 00:05:24.854 Test: test_add_multi_ns_to_bdev ...[2024-07-23 06:18:37.245734] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 [2024-07-23 06:18:37.245797] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5382:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:24.854 passed 00:05:24.854 Test: test_add_multi_io_paths_to_nbdev_ch ...[2024-07-23 06:18:37.245908] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4574:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:24.854 passed 00:05:24.854 Test: test_admin_path ...passed 00:05:24.854 Test: test_reset_bdev_ctrlr ...passed 00:05:24.854 Test: test_find_io_path ...passed 00:05:24.854 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:24.854 Test: test_retry_io_for_io_path_error ...passed 00:05:24.854 Test: test_retry_io_count ...passed 00:05:24.854 Test: test_concurrent_read_ana_log_page ...passed 00:05:24.854 Test: test_retry_io_for_ana_error ...passed 00:05:24.854 Test: test_check_io_error_resiliency_params ...[2024-07-23 06:18:37.246675] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:05:24.854 [2024-07-23 06:18:37.246694] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6080:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:24.854 [2024-07-23 06:18:37.246702] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6089:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:24.854 passed 00:05:24.854 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-23 06:18:37.246708] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6092:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:24.854 [2024-07-23 06:18:37.246715] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:24.854 [2024-07-23 06:18:37.246723] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:24.854 [2024-07-23 06:18:37.246729] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:24.854 [2024-07-23 06:18:37.246736] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6099:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:24.854 [2024-07-23 06:18:37.246742] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6096:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:24.854 passed 00:05:24.854 Test: test_reconnect_ctrlr ...[2024-07-23 06:18:37.246819] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 [2024-07-23 06:18:37.246867] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 [2024-07-23 06:18:37.247159] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 [2024-07-23 06:18:37.247188] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 [2024-07-23 06:18:37.247338] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 passed 00:05:24.854 Test: test_retry_failover_ctrlr ...passed 00:05:24.854 Test: test_fail_path ...[2024-07-23 06:18:37.247451] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 [2024-07-23 06:18:37.247495] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 [2024-07-23 06:18:37.247635] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:24.854 passed 00:05:24.854 Test: test_nvme_ns_cmp ...passed 00:05:24.854 Test: test_ana_transition ...passed 00:05:24.854 Test: test_set_preferred_path ...[2024-07-23 06:18:37.247714] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 [2024-07-23 06:18:37.247733] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 [2024-07-23 06:18:37.247743] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 passed 00:05:24.854 Test: test_find_next_io_path ...passed 00:05:24.854 Test: test_find_io_path_min_qd ...passed 00:05:24.854 Test: test_disable_auto_failback ...[2024-07-23 06:18:37.247985] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 passed 00:05:24.854 Test: test_set_multipath_policy ...passed 00:05:24.854 Test: test_uuid_generation ...passed 00:05:24.854 Test: test_retry_io_to_same_path ...passed 00:05:24.854 Test: test_race_between_reset_and_disconnected ...passed 00:05:24.854 Test: test_ctrlr_op_rpc ...passed 00:05:24.854 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:24.854 Test: test_disable_enable_ctrlr ...[2024-07-23 06:18:37.281633] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 [2024-07-23 06:18:37.282094] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:24.854 passed 00:05:24.854 Test: test_delete_ctrlr_done ...passed 00:05:24.854 Test: test_ns_remove_during_reset ...passed 00:05:24.854 Test: test_io_path_is_current ...passed 00:05:24.855 00:05:24.855 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.855 suites 1 1 n/a 0 0 00:05:24.855 tests 49 49 49 0 0 00:05:24.855 asserts 3578 3578 3578 0 n/a 00:05:24.855 00:05:24.855 Elapsed time = 0.016 seconds 00:05:24.855 06:18:37 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:24.855 00:05:24.855 00:05:24.855 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.855 http://cunit.sourceforge.net/ 00:05:24.855 00:05:24.855 Test Options 00:05:24.855 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:24.855 00:05:24.855 Suite: raid 00:05:24.855 Test: test_create_raid ...passed 00:05:24.855 Test: test_create_raid_superblock ...passed 00:05:24.855 Test: test_delete_raid ...passed 00:05:24.855 Test: test_create_raid_invalid_args ...[2024-07-23 06:18:37.293553] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1507:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:24.855 [2024-07-23 06:18:37.293740] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1501:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:24.855 [2024-07-23 06:18:37.293825] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1491:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:24.855 [2024-07-23 06:18:37.293863] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3283:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:24.855 [2024-07-23 
06:18:37.293875] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3461:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:24.855 [2024-07-23 06:18:37.294006] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3283:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:24.855 [2024-07-23 06:18:37.294017] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3461:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:24.855 passed 00:05:24.855 Test: test_delete_raid_invalid_args ...passed 00:05:24.855 Test: test_io_channel ...passed 00:05:24.855 Test: test_reset_io ...passed 00:05:24.855 Test: test_multi_raid ...passed 00:05:24.855 Test: test_io_type_supported ...passed 00:05:24.855 Test: test_raid_json_dump_info ...passed 00:05:24.855 Test: test_context_size ...passed 00:05:24.855 Test: test_raid_level_conversions ...passed 00:05:24.855 Test: test_raid_io_split ...passed 00:05:24.855 Test: test_raid_process ...passed 00:05:24.855 Test: test_raid_process_with_qos ...passed 00:05:24.855 00:05:24.855 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.855 suites 1 1 n/a 0 0 00:05:24.855 tests 15 15 15 0 0 00:05:24.855 asserts 6602 6602 6602 0 n/a 00:05:24.855 00:05:24.855 Elapsed time = 0.008 seconds 00:05:24.855 06:18:37 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:24.855 00:05:24.855 00:05:24.855 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.855 http://cunit.sourceforge.net/ 00:05:24.855 00:05:24.855 00:05:24.855 Suite: raid_sb 00:05:24.855 Test: test_raid_bdev_write_superblock ...passed 00:05:24.855 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:24.855 Test: test_raid_bdev_parse_superblock ...[2024-07-23 06:18:37.301986] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:24.855 passed 00:05:24.855 Suite: raid_sb_md 00:05:24.855 Test: test_raid_bdev_write_superblock ...passed 00:05:24.855 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:24.855 Test: test_raid_bdev_parse_superblock ...passed 00:05:24.855 Suite: raid_sb_md_interleaved 00:05:24.855 Test: test_raid_bdev_write_superblock ...[2024-07-23 06:18:37.302493] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:24.855 passed 00:05:24.855 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:24.855 Test: test_raid_bdev_parse_superblock ...passed 00:05:24.855 00:05:24.855 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.855 suites 3 3 n/a 0 0 00:05:24.855 tests 9 9 9 0 0 00:05:24.855 asserts 139 139 139 0 n/a 00:05:24.855 00:05:24.855 Elapsed time = 0.000 seconds 00:05:24.855 [2024-07-23 06:18:37.302602] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:24.855 06:18:37 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:24.855 00:05:24.855 00:05:24.855 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.855 http://cunit.sourceforge.net/ 00:05:24.855 00:05:24.855 00:05:24.855 Suite: concat 00:05:24.855 Test: 
test_concat_start ...passed 00:05:24.855 Test: test_concat_rw ...passed 00:05:24.855 Test: test_concat_null_payload ...passed 00:05:24.855 00:05:24.855 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.855 suites 1 1 n/a 0 0 00:05:24.855 tests 3 3 3 0 0 00:05:24.855 asserts 8460 8460 8460 0 n/a 00:05:24.855 00:05:24.855 Elapsed time = 0.000 seconds 00:05:24.855 06:18:37 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:05:24.855 00:05:24.855 00:05:24.855 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.855 http://cunit.sourceforge.net/ 00:05:24.855 00:05:24.855 00:05:24.855 Suite: raid0 00:05:24.855 Test: test_write_io ...passed 00:05:24.855 Test: test_read_io ...passed 00:05:24.855 Test: test_unmap_io ...passed 00:05:24.855 Test: test_io_failure ...passed 00:05:24.855 Suite: raid0_dif 00:05:24.855 Test: test_write_io ...passed 00:05:24.855 Test: test_read_io ...passed 00:05:24.855 Test: test_unmap_io ...passed 00:05:24.855 Test: test_io_failure ...passed 00:05:24.855 00:05:24.855 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.855 suites 2 2 n/a 0 0 00:05:24.855 tests 8 8 8 0 0 00:05:24.855 asserts 368291 368291 368291 0 n/a 00:05:24.855 00:05:24.855 Elapsed time = 0.016 seconds 00:05:24.855 06:18:37 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:24.855 00:05:24.855 00:05:24.855 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.855 http://cunit.sourceforge.net/ 00:05:24.855 00:05:24.855 00:05:24.855 Suite: raid1 00:05:24.855 Test: test_raid1_start ...passed 00:05:24.855 Test: test_raid1_read_balancing ...passed 00:05:24.855 Test: test_raid1_write_error ...passed 00:05:24.855 Test: test_raid1_read_error ...passed 00:05:24.855 00:05:24.855 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.855 suites 1 1 n/a 0 0 00:05:24.855 tests 4 4 4 0 0 00:05:24.855 asserts 4374 4374 4374 0 n/a 00:05:24.855 00:05:24.855 Elapsed time = 0.000 seconds 00:05:24.855 06:18:37 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:24.855 00:05:24.855 00:05:24.855 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.855 http://cunit.sourceforge.net/ 00:05:24.855 00:05:24.855 00:05:24.855 Suite: zone 00:05:24.855 Test: test_zone_get_operation ...passed 00:05:24.855 Test: test_bdev_zone_get_info ...passed 00:05:24.855 Test: test_bdev_zone_management ...passed 00:05:24.855 Test: test_bdev_zone_append ...passed 00:05:24.855 Test: test_bdev_zone_append_with_md ...passed 00:05:24.855 Test: test_bdev_zone_appendv ...passed 00:05:24.855 Test: test_bdev_zone_appendv_with_md ...passed 00:05:24.855 Test: test_bdev_io_get_append_location ...passed 00:05:24.855 00:05:24.855 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.855 suites 1 1 n/a 0 0 00:05:24.855 tests 8 8 8 0 0 00:05:24.855 asserts 94 94 94 0 n/a 00:05:24.855 00:05:24.855 Elapsed time = 0.000 seconds 00:05:24.855 06:18:37 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:24.855 00:05:24.855 00:05:24.855 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.855 http://cunit.sourceforge.net/ 00:05:24.855 00:05:24.855 00:05:24.855 Suite: gpt_parse 00:05:24.855 Test: test_parse_mbr_and_primary ...[2024-07-23 06:18:37.344102] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 
259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:24.856 [2024-07-23 06:18:37.344379] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:24.856 [2024-07-23 06:18:37.344436] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:24.856 [2024-07-23 06:18:37.344454] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:24.856 [2024-07-23 06:18:37.344473] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:24.856 [2024-07-23 06:18:37.344488] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:24.856 passed 00:05:24.856 Test: test_parse_secondary ...[2024-07-23 06:18:37.344752] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:24.856 [2024-07-23 06:18:37.344775] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:24.856 [2024-07-23 06:18:37.344824] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:24.856 [2024-07-23 06:18:37.344852] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:24.856 passed 00:05:24.856 Test: test_check_mbr ...[2024-07-23 06:18:37.345103] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:24.856 passed 00:05:24.856 Test: test_read_header ...passed[2024-07-23 06:18:37.345134] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:24.856 [2024-07-23 06:18:37.345165] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:24.856 [2024-07-23 06:18:37.345182] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:24.856 [2024-07-23 06:18:37.345197] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:24.856 [2024-07-23 06:18:37.345214] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:24.856 [2024-07-23 06:18:37.345230] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:24.856 [2024-07-23 06:18:37.345245] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:24.856 00:05:24.856 Test: test_read_partitions ...[2024-07-23 06:18:37.345268] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:24.856 [2024-07-23 06:18:37.345284] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:24.856 [2024-07-23 06:18:37.345298] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 
00:05:24.856 [2024-07-23 06:18:37.345312] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:24.856 [2024-07-23 06:18:37.345429] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:05:24.856 passed 00:05:24.856 00:05:24.856 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.856 suites 1 1 n/a 0 0 00:05:24.856 tests 5 5 5 0 0 00:05:24.856 asserts 33 33 33 0 n/a 00:05:24.856 00:05:24.856 Elapsed time = 0.000 seconds 00:05:24.856 06:18:37 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:24.856 00:05:24.856 00:05:24.856 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.856 http://cunit.sourceforge.net/ 00:05:24.856 00:05:24.856 00:05:24.856 Suite: bdev_part 00:05:24.856 Test: part_test ...[2024-07-23 06:18:37.351987] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name b47cbc3f-a9ef-745d-9b2c-9600e8ddc89e already exists 00:05:24.856 [2024-07-23 06:18:37.352178] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:b47cbc3f-a9ef-745d-9b2c-9600e8ddc89e alias for bdev test1 00:05:24.856 passed 00:05:24.856 Test: part_free_test ...passed 00:05:24.856 Test: part_get_io_channel_test ...passed 00:05:24.856 Test: part_construct_ext ...passed 00:05:24.856 00:05:24.856 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.856 suites 1 1 n/a 0 0 00:05:24.856 tests 4 4 4 0 0 00:05:24.856 asserts 48 48 48 0 n/a 00:05:24.856 00:05:24.856 Elapsed time = 0.008 seconds 00:05:24.856 06:18:37 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:24.856 00:05:24.856 00:05:24.856 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.856 http://cunit.sourceforge.net/ 00:05:24.856 00:05:24.856 00:05:24.856 Suite: scsi_nvme_suite 00:05:24.856 Test: scsi_nvme_translate_test ...passed 00:05:24.856 00:05:24.856 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.856 suites 1 1 n/a 0 0 00:05:24.856 tests 1 1 1 0 0 00:05:24.856 asserts 104 104 104 0 n/a 00:05:24.856 00:05:24.856 Elapsed time = 0.000 seconds 00:05:24.856 06:18:37 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:24.856 00:05:24.856 00:05:24.856 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.856 http://cunit.sourceforge.net/ 00:05:24.856 00:05:24.856 00:05:24.856 Suite: lvol 00:05:24.856 Test: ut_lvs_init ...passed 00:05:24.856 Test: ut_lvol_init ...passed 00:05:24.856 Test: ut_lvol_snapshot ...[2024-07-23 06:18:37.363349] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:24.856 [2024-07-23 06:18:37.363516] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:24.856 [2024-07-23 06:18:37.363598] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:24.856 passed 00:05:24.856 Test: ut_lvol_clone ...passed 00:05:24.856 Test: ut_lvs_destroy ...passed 00:05:24.856 Test: ut_lvs_unload ...passed 00:05:24.856 Test: ut_lvol_resize ...passed 00:05:24.856 Test: ut_lvol_set_read_only ...passed 00:05:24.856 Test: ut_lvol_hotremove ...passed 
00:05:24.856 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:24.856 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:24.856 Test: ut_lvol_read_write ...passed 00:05:24.856 Test: ut_vbdev_lvol_submit_request ...passed 00:05:24.856 Test: ut_lvol_examine_config ...passed 00:05:24.856 Test: ut_lvol_examine_disk ...[2024-07-23 06:18:37.363663] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:24.856 passed 00:05:24.856 Test: ut_lvol_rename ...passed 00:05:24.856 Test: ut_bdev_finish ...passed 00:05:24.856 Test: ut_lvs_rename ...passed 00:05:24.856 Test: ut_lvol_seek ...passed 00:05:24.856 Test: ut_esnap_dev_create ...passed 00:05:24.856 Test: ut_lvol_esnap_clone_bad_args ...passed 00:05:24.856 Test: ut_lvol_shallow_copy ...passed 00:05:24.856 Test: ut_lvol_set_external_parent ...passed 00:05:24.856 00:05:24.856 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.856 suites 1 1 n/a 0 0 00:05:24.856 tests 23 23 23 0 0 00:05:24.856 asserts 770 770 770 0 n/a 00:05:24.856 00:05:24.856 Elapsed time = 0.000 seconds 00:05:24.856 [2024-07-23 06:18:37.363700] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:24.856 [2024-07-23 06:18:37.363710] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:24.856 [2024-07-23 06:18:37.363743] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:24.856 [2024-07-23 06:18:37.363753] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:24.856 [2024-07-23 06:18:37.363762] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:24.856 [2024-07-23 06:18:37.363787] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:24.856 [2024-07-23 06:18:37.363798] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:05:24.856 [2024-07-23 06:18:37.363820] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:05:24.856 [2024-07-23 06:18:37.363828] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:05:24.856 [2024-07-23 06:18:37.363842] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:05:24.856 06:18:37 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:25.116 00:05:25.116 00:05:25.116 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.116 http://cunit.sourceforge.net/ 00:05:25.116 00:05:25.116 00:05:25.116 Suite: zone_block 00:05:25.116 Test: test_zone_block_create ...passed 00:05:25.116 Test: test_zone_block_create_invalid ...[2024-07-23 06:18:37.373341] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 
624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:25.116 passed 00:05:25.116 Test: test_get_zone_info ...passed 00:05:25.116 Test: test_supported_io_types ...passed 00:05:25.116 Test: test_reset_zone ...passed 00:05:25.116 Test: test_open_zone ...passed 00:05:25.116 Test: test_zone_write ...[2024-07-23 06:18:37.373492] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-23 06:18:37.373511] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:25.116 [2024-07-23 06:18:37.373521] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-23 06:18:37.373532] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 861:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:25.116 [2024-07-23 06:18:37.373541] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-23 06:18:37.373550] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 866:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:25.116 [2024-07-23 06:18:37.373558] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-23 06:18:37.373619] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.116 [2024-07-23 06:18:37.373636] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.116 [2024-07-23 06:18:37.373647] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.116 [2024-07-23 06:18:37.373698] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.116 [2024-07-23 06:18:37.373710] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.116 [2024-07-23 06:18:37.373742] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.116 [2024-07-23 06:18:37.373973] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.116 [2024-07-23 06:18:37.373984] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 [2024-07-23 06:18:37.374019] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:25.117 [2024-07-23 06:18:37.374028] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:25.117 [2024-07-23 06:18:37.374040] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:25.117 [2024-07-23 06:18:37.374048] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 [2024-07-23 06:18:37.374571] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:25.117 [2024-07-23 06:18:37.374581] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 [2024-07-23 06:18:37.374592] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:25.117 [2024-07-23 06:18:37.374600] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 passed 00:05:25.117 Test: test_zone_read ...[2024-07-23 06:18:37.375196] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:25.117 [2024-07-23 06:18:37.375205] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 [2024-07-23 06:18:37.375236] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:25.117 [2024-07-23 06:18:37.375246] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 [2024-07-23 06:18:37.375258] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:25.117 [2024-07-23 06:18:37.375266] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 [2024-07-23 06:18:37.375311] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:25.117 [2024-07-23 06:18:37.375319] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 passed 00:05:25.117 Test: test_close_zone ...passed 00:05:25.117 Test: test_finish_zone ...passed 00:05:25.117 Test: test_append_zone ...[2024-07-23 06:18:37.375354] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 [2024-07-23 06:18:37.375371] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 [2024-07-23 06:18:37.375411] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:25.117 [2024-07-23 06:18:37.375421] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 [2024-07-23 06:18:37.375483] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 [2024-07-23 06:18:37.375495] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 [2024-07-23 06:18:37.375524] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:25.117 [2024-07-23 06:18:37.375533] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 [2024-07-23 06:18:37.375544] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:25.117 [2024-07-23 06:18:37.375552] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:25.117 passed 00:05:25.117 00:05:25.117 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.117 suites 1 1 n/a 0 0 00:05:25.117 tests 11 11 11 0 0 00:05:25.117 asserts 3437 3437 3437 0 n/a 00:05:25.117 00:05:25.117 Elapsed time = 0.008 seconds 00:05:25.117 [2024-07-23 06:18:37.376715] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:25.117 [2024-07-23 06:18:37.376726] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
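(Editor's note, not part of the captured log.) The zone_block failures above — "Trying to write to invalid zone (lba 0x5000)", "Trying to write to zone with invalid address (lba 0x407, wp 0x405)", "Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)" — are negative-path checks on write-pointer handling for a zoned vbdev. A rough sketch of that kind of validation is shown below; the struct and function names are made up for illustration and are not SPDK's vbdev_zone_block implementation.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: names and layout are hypothetical, not SPDK's. */
struct zone {
    uint64_t start_lba;   /* first LBA of the zone              */
    uint64_t capacity;    /* writable blocks in the zone        */
    uint64_t write_ptr;   /* next LBA that may be written       */
};

/* A sequential zoned write must target a valid zone, start exactly at the
 * write pointer, and must not run past the zone's writable capacity. */
static bool
zone_write_ok(const struct zone *z, uint64_t lba, uint64_t num_blocks)
{
    if (lba < z->start_lba || lba >= z->start_lba + z->capacity) {
        return false;                       /* write to invalid zone       */
    }
    if (lba != z->write_ptr) {
        return false;                       /* invalid address vs. wp      */
    }
    if (lba + num_blocks > z->start_lba + z->capacity) {
        return false;                       /* write exceeds zone capacity */
    }
    return true;
}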
00:05:25.117 06:18:37 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:25.117 00:05:25.117 00:05:25.117 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.117 http://cunit.sourceforge.net/ 00:05:25.117 00:05:25.117 00:05:25.117 Suite: bdev 00:05:25.117 Test: basic ...[2024-07-23 06:18:37.386537] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b639): Operation not permitted (rc=-1) 00:05:25.117 [2024-07-23 06:18:37.386796] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x106a2fa6a480 (0x24b630): Operation not permitted (rc=-1) 00:05:25.117 [2024-07-23 06:18:37.386819] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b639): Operation not permitted (rc=-1) 00:05:25.117 passed 00:05:25.117 Test: unregister_and_close ...passed 00:05:25.117 Test: unregister_and_close_different_threads ...passed 00:05:25.117 Test: basic_qos ...passed 00:05:25.117 Test: put_channel_during_reset ...passed 00:05:25.117 Test: aborted_reset ...passed 00:05:25.117 Test: aborted_reset_no_outstanding_io ...passed 00:05:25.117 Test: io_during_reset ...passed 00:05:25.117 Test: reset_completions ...passed 00:05:25.117 Test: io_during_qos_queue ...passed 00:05:25.117 Test: io_during_qos_reset ...passed 00:05:25.117 Test: enomem ...passed 00:05:25.117 Test: enomem_multi_bdev ...passed 00:05:25.117 Test: enomem_multi_bdev_unregister ...passed 00:05:25.117 Test: enomem_multi_io_target ...passed 00:05:25.117 Test: qos_dynamic_enable ...passed 00:05:25.117 Test: bdev_histograms_mt ...passed 00:05:25.117 Test: bdev_set_io_timeout_mt ...[2024-07-23 06:18:37.420484] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x106a2fa6a600 not unregistered 00:05:25.117 passed 00:05:25.117 Test: lock_lba_range_then_submit_io ...[2024-07-23 06:18:37.421538] thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x24b618 already registered (old:0x106a2fa6a600 new:0x106a2fa6a780) 00:05:25.117 passed 00:05:25.117 Test: unregister_during_reset ...passed 00:05:25.117 Test: event_notify_and_close ...passed 00:05:25.117 Test: unregister_and_qos_poller ...passed 00:05:25.117 Suite: bdev_wrong_thread 00:05:25.117 Test: spdk_bdev_register_wt ...[2024-07-23 06:18:37.427408] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8536:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x106a2fa33380 (0x106a2fa33380) 00:05:25.117 passed 00:05:25.117 Test: spdk_bdev_examine_wt ...passed[2024-07-23 06:18:37.427464] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 811:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x106a2fa33380 (0x106a2fa33380) 00:05:25.117 00:05:25.117 00:05:25.117 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.117 suites 2 2 n/a 0 0 00:05:25.117 tests 24 24 24 0 0 00:05:25.117 asserts 621 621 621 0 n/a 00:05:25.117 00:05:25.117 Elapsed time = 0.047 seconds 00:05:25.117 00:05:25.117 real 0m0.261s 00:05:25.117 user 0m0.182s 00:05:25.117 sys 0m0.076s 00:05:25.117 06:18:37 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.117 06:18:37 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:25.117 ************************************ 00:05:25.117 END TEST unittest_bdev 00:05:25.117 ************************************ 00:05:25.117 06:18:37 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:25.117 
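(Editor's note, not part of the captured log.) Each *_ut binary invoked above (bdev_raid_ut, gpt_ut, bdev_ut, and the blob_ut run that follows) is a standalone CUnit program, which is why the same "CUnit - A unit testing framework for C" banner and "Run Summary: Type Total Ran Passed Failed Inactive" table repeat per binary. A minimal sketch of such a harness using the standard CUnit API is given below; the suite and test names are placeholders, not the actual SPDK test registrations.

#include <CUnit/Basic.h>

/* Placeholder test case: the real suites register many cases such as the
 * test_raid_bdev_parse_superblock or blob_init tests seen in this log. */
static void
example_test(void)
{
    CU_ASSERT_EQUAL(1 + 1, 2);
}

int
main(void)
{
    unsigned int num_failures;

    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }

    CU_pSuite suite = CU_add_suite("example_suite", NULL, NULL);
    if (suite == NULL || CU_add_test(suite, "example_test", example_test) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }

    /* Prints the per-test results and the final Run Summary table. */
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();
    num_failures = CU_get_number_of_failures();
    CU_cleanup_registry();

    return num_failures;
}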
06:18:37 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:25.117 06:18:37 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:25.117 06:18:37 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:25.117 06:18:37 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:25.117 06:18:37 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:05:25.117 06:18:37 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.117 06:18:37 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.117 06:18:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:25.117 ************************************ 00:05:25.117 START TEST unittest_blob_blobfs 00:05:25.117 ************************************ 00:05:25.117 06:18:37 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1123 -- # unittest_blob 00:05:25.117 06:18:37 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:05:25.118 06:18:37 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:05:25.118 00:05:25.118 00:05:25.118 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.118 http://cunit.sourceforge.net/ 00:05:25.118 00:05:25.118 00:05:25.118 Suite: blob_nocopy_noextent 00:05:25.118 Test: blob_init ...[2024-07-23 06:18:37.489497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:25.118 passed 00:05:25.118 Test: blob_thin_provision ...passed 00:05:25.118 Test: blob_read_only ...passed 00:05:25.118 Test: bs_load ...[2024-07-23 06:18:37.569644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:25.118 passed 00:05:25.118 Test: bs_load_custom_cluster_size ...passed 00:05:25.118 Test: bs_load_after_failed_grow ...passed 00:05:25.118 Test: bs_cluster_sz ...[2024-07-23 06:18:37.592403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:25.118 [2024-07-23 06:18:37.592501] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:05:25.118 [2024-07-23 06:18:37.592519] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:25.118 passed 00:05:25.118 Test: bs_resize_md ...passed 00:05:25.118 Test: bs_destroy ...passed 00:05:25.377 Test: bs_type ...passed 00:05:25.377 Test: bs_super_block ...passed 00:05:25.377 Test: bs_test_recover_cluster_count ...passed 00:05:25.377 Test: bs_grow_live ...passed 00:05:25.377 Test: bs_grow_live_no_space ...passed 00:05:25.377 Test: bs_test_grow ...passed 00:05:25.377 Test: blob_serialize_test ...passed 00:05:25.377 Test: super_block_crc ...passed 00:05:25.377 Test: blob_thin_prov_write_count_io ...passed 00:05:25.377 Test: blob_thin_prov_unmap_cluster ...passed 00:05:25.377 Test: bs_load_iter_test ...passed 00:05:25.377 Test: blob_relations ...[2024-07-23 06:18:37.766398] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:25.377 [2024-07-23 06:18:37.766467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.377 [2024-07-23 06:18:37.766594] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:25.377 [2024-07-23 06:18:37.766608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.377 passed 00:05:25.377 Test: blob_relations2 ...[2024-07-23 06:18:37.779416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:25.377 [2024-07-23 06:18:37.779472] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.377 [2024-07-23 06:18:37.779483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:25.377 [2024-07-23 06:18:37.779490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.377 [2024-07-23 06:18:37.779641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:25.377 [2024-07-23 06:18:37.779653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.377 [2024-07-23 06:18:37.779689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:25.377 [2024-07-23 06:18:37.779698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.377 passed 00:05:25.377 Test: blob_relations3 ...passed 00:05:25.636 Test: blobstore_clean_power_failure ...passed 00:05:25.636 Test: blob_delete_snapshot_power_failure ...[2024-07-23 06:18:37.948083] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:25.636 [2024-07-23 06:18:37.959959] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:25.636 [2024-07-23 06:18:37.960022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:25.636 [2024-07-23 06:18:37.960032] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.636 [2024-07-23 06:18:37.971090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:25.636 [2024-07-23 06:18:37.971139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:25.636 [2024-07-23 06:18:37.971147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:25.636 [2024-07-23 06:18:37.971155] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.636 [2024-07-23 06:18:37.982234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:25.636 [2024-07-23 06:18:37.982268] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.636 [2024-07-23 06:18:37.993795] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:25.636 [2024-07-23 06:18:37.993864] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.636 [2024-07-23 06:18:38.005776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:25.636 [2024-07-23 06:18:38.005850] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.636 passed 00:05:25.636 Test: blob_create_snapshot_power_failure ...[2024-07-23 06:18:38.042495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:25.636 [2024-07-23 06:18:38.066901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:25.636 [2024-07-23 06:18:38.078851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:25.636 passed 00:05:25.636 Test: blob_io_unit ...passed 00:05:25.636 Test: blob_io_unit_compatibility ...passed 00:05:25.894 Test: blob_ext_md_pages ...passed 00:05:25.894 Test: blob_esnap_io_4096_4096 ...passed 00:05:25.894 Test: blob_esnap_io_512_512 ...passed 00:05:25.894 Test: blob_esnap_io_4096_512 ...passed 00:05:25.894 Test: blob_esnap_io_512_4096 ...passed 00:05:25.894 Test: blob_esnap_clone_resize ...passed 00:05:25.894 Suite: blob_bs_nocopy_noextent 00:05:25.894 Test: blob_open ...passed 00:05:25.894 Test: blob_create ...[2024-07-23 06:18:38.345074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:25.894 passed 00:05:25.894 Test: blob_create_loop ...passed 00:05:26.152 Test: blob_create_fail ...[2024-07-23 06:18:38.424361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:26.152 passed 00:05:26.152 Test: blob_create_internal ...passed 00:05:26.152 Test: blob_create_zero_extent ...passed 00:05:26.152 Test: blob_snapshot ...passed 00:05:26.152 Test: blob_clone ...passed 00:05:26.152 Test: blob_inflate 
...[2024-07-23 06:18:38.590815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:26.152 passed 00:05:26.152 Test: blob_delete ...passed 00:05:26.152 Test: blob_resize_test ...[2024-07-23 06:18:38.661007] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:26.410 passed 00:05:26.410 Test: blob_resize_thin_test ...passed 00:05:26.410 Test: channel_ops ...passed 00:05:26.410 Test: blob_super ...passed 00:05:26.410 Test: blob_rw_verify_iov ...passed 00:05:26.410 Test: blob_unmap ...passed 00:05:26.410 Test: blob_iter ...passed 00:05:26.410 Test: blob_parse_md ...passed 00:05:26.669 Test: bs_load_pending_removal ...passed 00:05:26.669 Test: bs_unload ...[2024-07-23 06:18:38.982660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:26.669 passed 00:05:26.669 Test: bs_usable_clusters ...passed 00:05:26.669 Test: blob_crc ...[2024-07-23 06:18:39.049839] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:26.669 [2024-07-23 06:18:39.049902] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:26.669 passed 00:05:26.669 Test: blob_flags ...passed 00:05:26.669 Test: bs_version ...passed 00:05:26.669 Test: blob_set_xattrs_test ...[2024-07-23 06:18:39.144423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:26.669 [2024-07-23 06:18:39.144495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:26.669 passed 00:05:26.928 Test: blob_thin_prov_alloc ...passed 00:05:26.928 Test: blob_insert_cluster_msg_test ...passed 00:05:26.928 Test: blob_thin_prov_rw ...passed 00:05:26.928 Test: blob_thin_prov_rle ...passed 00:05:26.928 Test: blob_thin_prov_rw_iov ...passed 00:05:26.928 Test: blob_snapshot_rw ...passed 00:05:26.928 Test: blob_snapshot_rw_iov ...passed 00:05:27.186 Test: blob_inflate_rw ...passed 00:05:27.186 Test: blob_snapshot_freeze_io ...passed 00:05:27.186 Test: blob_operation_split_rw ...passed 00:05:27.186 Test: blob_operation_split_rw_iov ...passed 00:05:27.186 Test: blob_simultaneous_operations ...[2024-07-23 06:18:39.668001] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:27.186 [2024-07-23 06:18:39.668076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:27.186 [2024-07-23 06:18:39.668399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:27.186 [2024-07-23 06:18:39.668410] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:27.186 [2024-07-23 06:18:39.671711] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:27.186 [2024-07-23 06:18:39.671728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:27.186 [2024-07-23 06:18:39.671745] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:27.186 [2024-07-23 06:18:39.671752] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:27.186 passed 00:05:27.445 Test: blob_persist_test ...passed 00:05:27.445 Test: blob_decouple_snapshot ...passed 00:05:27.445 Test: blob_seek_io_unit ...passed 00:05:27.445 Test: blob_nested_freezes ...passed 00:05:27.445 Test: blob_clone_resize ...passed 00:05:27.445 Test: blob_shallow_copy ...[2024-07-23 06:18:39.901083] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:27.445 [2024-07-23 06:18:39.901195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:27.445 [2024-07-23 06:18:39.901206] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:27.445 passed 00:05:27.445 Suite: blob_blob_nocopy_noextent 00:05:27.445 Test: blob_write ...passed 00:05:27.703 Test: blob_read ...passed 00:05:27.703 Test: blob_rw_verify ...passed 00:05:27.703 Test: blob_rw_verify_iov_nomem ...passed 00:05:27.703 Test: blob_rw_iov_read_only ...passed 00:05:27.703 Test: blob_xattr ...passed 00:05:27.703 Test: blob_dirty_shutdown ...passed 00:05:27.703 Test: blob_is_degraded ...passed 00:05:27.703 Suite: blob_esnap_bs_nocopy_noextent 00:05:27.703 Test: blob_esnap_create ...passed 00:05:27.961 Test: blob_esnap_thread_add_remove ...passed 00:05:27.961 Test: blob_esnap_clone_snapshot ...passed 00:05:27.961 Test: blob_esnap_clone_inflate ...passed 00:05:27.961 Test: blob_esnap_clone_decouple ...passed 00:05:27.961 Test: blob_esnap_clone_reload ...passed 00:05:27.961 Test: blob_esnap_hotplug ...passed 00:05:27.961 Test: blob_set_parent ...[2024-07-23 06:18:40.445498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:05:27.961 [2024-07-23 06:18:40.445555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:05:27.961 [2024-07-23 06:18:40.445577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:05:27.961 [2024-07-23 06:18:40.445587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:05:27.961 [2024-07-23 06:18:40.445638] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:27.961 passed 00:05:28.220 Test: blob_set_external_parent ...[2024-07-23 06:18:40.478105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:05:28.220 [2024-07-23 06:18:40.478165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7797:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:05:28.220 [2024-07-23 06:18:40.478174] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:05:28.220 [2024-07-23 06:18:40.478220] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:28.220 passed 00:05:28.220 Suite: blob_nocopy_extent 00:05:28.220 Test: blob_init ...[2024-07-23 06:18:40.489097] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:28.220 passed 00:05:28.220 Test: blob_thin_provision ...passed 00:05:28.220 Test: blob_read_only ...passed 00:05:28.220 Test: bs_load ...[2024-07-23 06:18:40.533538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:28.220 passed 00:05:28.220 Test: bs_load_custom_cluster_size ...passed 00:05:28.220 Test: bs_load_after_failed_grow ...passed 00:05:28.220 Test: bs_cluster_sz ...[2024-07-23 06:18:40.556022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:28.220 [2024-07-23 06:18:40.556112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:05:28.220 [2024-07-23 06:18:40.556125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:28.220 passed 00:05:28.220 Test: bs_resize_md ...passed 00:05:28.220 Test: bs_destroy ...passed 00:05:28.220 Test: bs_type ...passed 00:05:28.220 Test: bs_super_block ...passed 00:05:28.220 Test: bs_test_recover_cluster_count ...passed 00:05:28.220 Test: bs_grow_live ...passed 00:05:28.220 Test: bs_grow_live_no_space ...passed 00:05:28.220 Test: bs_test_grow ...passed 00:05:28.220 Test: blob_serialize_test ...passed 00:05:28.220 Test: super_block_crc ...passed 00:05:28.220 Test: blob_thin_prov_write_count_io ...passed 00:05:28.220 Test: blob_thin_prov_unmap_cluster ...passed 00:05:28.220 Test: bs_load_iter_test ...passed 00:05:28.220 Test: blob_relations ...[2024-07-23 06:18:40.711531] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:28.220 [2024-07-23 06:18:40.711615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.220 [2024-07-23 06:18:40.711740] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:28.220 [2024-07-23 06:18:40.711751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.220 passed 00:05:28.220 Test: blob_relations2 ...[2024-07-23 06:18:40.722538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:28.220 [2024-07-23 06:18:40.722562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.220 [2024-07-23 06:18:40.722571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:28.220 [2024-07-23 06:18:40.722577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.220 [2024-07-23 
06:18:40.722703] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:28.220 [2024-07-23 06:18:40.722715] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.220 [2024-07-23 06:18:40.722761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:28.220 [2024-07-23 06:18:40.722770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.220 passed 00:05:28.220 Test: blob_relations3 ...passed 00:05:28.479 Test: blobstore_clean_power_failure ...passed 00:05:28.479 Test: blob_delete_snapshot_power_failure ...[2024-07-23 06:18:40.875956] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:28.479 [2024-07-23 06:18:40.887618] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:28.479 [2024-07-23 06:18:40.898931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:28.479 [2024-07-23 06:18:40.898972] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:28.479 [2024-07-23 06:18:40.898980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.479 [2024-07-23 06:18:40.909578] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:28.479 [2024-07-23 06:18:40.909611] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:28.479 [2024-07-23 06:18:40.909619] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:28.479 [2024-07-23 06:18:40.909626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.479 [2024-07-23 06:18:40.919946] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:28.479 [2024-07-23 06:18:40.919977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:28.479 [2024-07-23 06:18:40.919985] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:28.479 [2024-07-23 06:18:40.919993] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.479 [2024-07-23 06:18:40.930277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:28.479 [2024-07-23 06:18:40.930308] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.479 [2024-07-23 06:18:40.940765] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:28.479 [2024-07-23 06:18:40.940799] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.479 [2024-07-23 06:18:40.951340] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:28.479 [2024-07-23 06:18:40.951387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.479 passed 00:05:28.479 Test: blob_create_snapshot_power_failure ...[2024-07-23 06:18:40.983434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:28.479 [2024-07-23 06:18:40.994220] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:28.738 [2024-07-23 06:18:41.015228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:28.738 [2024-07-23 06:18:41.025922] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:28.738 passed 00:05:28.738 Test: blob_io_unit ...passed 00:05:28.738 Test: blob_io_unit_compatibility ...passed 00:05:28.738 Test: blob_ext_md_pages ...passed 00:05:28.738 Test: blob_esnap_io_4096_4096 ...passed 00:05:28.738 Test: blob_esnap_io_512_512 ...passed 00:05:28.738 Test: blob_esnap_io_4096_512 ...passed 00:05:28.738 Test: blob_esnap_io_512_4096 ...passed 00:05:28.738 Test: blob_esnap_clone_resize ...passed 00:05:28.738 Suite: blob_bs_nocopy_extent 00:05:28.738 Test: blob_open ...passed 00:05:28.996 Test: blob_create ...[2024-07-23 06:18:41.272452] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:28.996 passed 00:05:28.996 Test: blob_create_loop ...passed 00:05:28.997 Test: blob_create_fail ...[2024-07-23 06:18:41.356436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:28.997 passed 00:05:28.997 Test: blob_create_internal ...passed 00:05:28.997 Test: blob_create_zero_extent ...passed 00:05:28.997 Test: blob_snapshot ...passed 00:05:28.997 Test: blob_clone ...passed 00:05:29.256 Test: blob_inflate ...[2024-07-23 06:18:41.534272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:05:29.256 passed 00:05:29.256 Test: blob_delete ...passed 00:05:29.256 Test: blob_resize_test ...[2024-07-23 06:18:41.599537] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:29.256 passed 00:05:29.256 Test: blob_resize_thin_test ...passed 00:05:29.256 Test: channel_ops ...passed 00:05:29.256 Test: blob_super ...passed 00:05:29.256 Test: blob_rw_verify_iov ...passed 00:05:29.515 Test: blob_unmap ...passed 00:05:29.515 Test: blob_iter ...passed 00:05:29.515 Test: blob_parse_md ...passed 00:05:29.515 Test: bs_load_pending_removal ...passed 00:05:29.515 Test: bs_unload ...[2024-07-23 06:18:41.900396] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:29.515 passed 00:05:29.515 Test: bs_usable_clusters ...passed 00:05:29.515 Test: blob_crc ...[2024-07-23 06:18:41.962416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:29.515 [2024-07-23 06:18:41.962480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:29.515 passed 00:05:29.515 Test: blob_flags ...passed 00:05:29.774 Test: bs_version ...passed 00:05:29.774 Test: blob_set_xattrs_test ...[2024-07-23 06:18:42.062568] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:29.774 [2024-07-23 06:18:42.062632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:29.774 passed 00:05:29.774 Test: blob_thin_prov_alloc ...passed 00:05:29.774 Test: blob_insert_cluster_msg_test ...passed 00:05:29.774 Test: blob_thin_prov_rw ...passed 00:05:29.774 Test: blob_thin_prov_rle ...passed 00:05:29.774 Test: blob_thin_prov_rw_iov ...passed 00:05:30.033 Test: blob_snapshot_rw ...passed 00:05:30.033 Test: blob_snapshot_rw_iov ...passed 00:05:30.033 Test: blob_inflate_rw ...passed 00:05:30.033 Test: blob_snapshot_freeze_io ...passed 00:05:30.033 Test: blob_operation_split_rw ...passed 00:05:30.291 Test: blob_operation_split_rw_iov ...passed 00:05:30.291 Test: blob_simultaneous_operations ...[2024-07-23 06:18:42.591870] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:30.291 [2024-07-23 06:18:42.591957] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:30.291 [2024-07-23 06:18:42.592261] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:30.291 [2024-07-23 06:18:42.592273] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:30.291 [2024-07-23 06:18:42.595885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:30.291 [2024-07-23 06:18:42.595914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:30.291 [2024-07-23 06:18:42.595933] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:30.291 [2024-07-23 06:18:42.595941] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:30.291 passed 00:05:30.291 Test: blob_persist_test ...passed 00:05:30.291 Test: blob_decouple_snapshot ...passed 00:05:30.291 Test: blob_seek_io_unit ...passed 00:05:30.291 Test: blob_nested_freezes ...passed 00:05:30.551 Test: blob_clone_resize ...passed 00:05:30.551 Test: blob_shallow_copy ...[2024-07-23 06:18:42.835320] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:30.551 [2024-07-23 06:18:42.835405] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:30.551 [2024-07-23 06:18:42.835417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:30.551 passed 00:05:30.551 Suite: blob_blob_nocopy_extent 00:05:30.551 Test: blob_write ...passed 00:05:30.551 Test: blob_read ...passed 00:05:30.551 Test: blob_rw_verify ...passed 00:05:30.551 Test: blob_rw_verify_iov_nomem ...passed 00:05:30.551 Test: blob_rw_iov_read_only ...passed 00:05:30.854 Test: blob_xattr ...passed 00:05:30.854 Test: blob_dirty_shutdown ...passed 00:05:30.854 Test: blob_is_degraded ...passed 00:05:30.854 Suite: blob_esnap_bs_nocopy_extent 00:05:30.854 Test: blob_esnap_create ...passed 00:05:30.854 Test: blob_esnap_thread_add_remove ...passed 00:05:30.854 Test: blob_esnap_clone_snapshot ...passed 00:05:30.854 Test: blob_esnap_clone_inflate ...passed 00:05:30.854 Test: blob_esnap_clone_decouple ...passed 00:05:30.854 Test: blob_esnap_clone_reload ...passed 00:05:31.113 Test: blob_esnap_hotplug ...passed 00:05:31.113 Test: blob_set_parent ...[2024-07-23 06:18:43.417502] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:05:31.113 [2024-07-23 06:18:43.417562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:05:31.113 [2024-07-23 06:18:43.417586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:05:31.113 [2024-07-23 06:18:43.417597] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:05:31.113 [2024-07-23 06:18:43.417656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:31.113 passed 00:05:31.113 Test: blob_set_external_parent ...[2024-07-23 06:18:43.451284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:05:31.113 [2024-07-23 06:18:43.451375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7797:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:05:31.113 [2024-07-23 06:18:43.451384] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:05:31.113 [2024-07-23 06:18:43.451432] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:31.113 passed 00:05:31.113 Suite: blob_copy_noextent 00:05:31.113 Test: blob_init ...[2024-07-23 06:18:43.462443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:31.113 passed 00:05:31.113 Test: blob_thin_provision ...passed 00:05:31.113 Test: blob_read_only ...passed 00:05:31.113 Test: bs_load ...[2024-07-23 06:18:43.509674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:31.113 passed 00:05:31.113 Test: bs_load_custom_cluster_size ...passed 00:05:31.113 Test: bs_load_after_failed_grow ...passed 00:05:31.113 Test: bs_cluster_sz ...[2024-07-23 06:18:43.532764] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:31.113 [2024-07-23 06:18:43.532834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:05:31.113 [2024-07-23 06:18:43.532851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:31.113 passed 00:05:31.113 Test: bs_resize_md ...passed 00:05:31.113 Test: bs_destroy ...passed 00:05:31.113 Test: bs_type ...passed 00:05:31.113 Test: bs_super_block ...passed 00:05:31.113 Test: bs_test_recover_cluster_count ...passed 00:05:31.113 Test: bs_grow_live ...passed 00:05:31.113 Test: bs_grow_live_no_space ...passed 00:05:31.113 Test: bs_test_grow ...passed 00:05:31.113 Test: blob_serialize_test ...passed 00:05:31.372 Test: super_block_crc ...passed 00:05:31.372 Test: blob_thin_prov_write_count_io ...passed 00:05:31.372 Test: blob_thin_prov_unmap_cluster ...passed 00:05:31.372 Test: bs_load_iter_test ...passed 00:05:31.372 Test: blob_relations ...[2024-07-23 06:18:43.692977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:31.372 [2024-07-23 06:18:43.693050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.372 [2024-07-23 06:18:43.693161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:31.372 [2024-07-23 06:18:43.693173] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.372 passed 00:05:31.372 Test: blob_relations2 ...[2024-07-23 06:18:43.704861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:31.372 [2024-07-23 06:18:43.704905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.372 [2024-07-23 06:18:43.704914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:31.372 [2024-07-23 06:18:43.704921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.372 [2024-07-23 06:18:43.705044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: 
Cannot remove snapshot with more than one clone 00:05:31.372 [2024-07-23 06:18:43.705055] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.372 [2024-07-23 06:18:43.705089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:31.372 [2024-07-23 06:18:43.705097] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.372 passed 00:05:31.372 Test: blob_relations3 ...passed 00:05:31.372 Test: blobstore_clean_power_failure ...passed 00:05:31.372 Test: blob_delete_snapshot_power_failure ...[2024-07-23 06:18:43.862300] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:31.372 [2024-07-23 06:18:43.873370] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:31.372 [2024-07-23 06:18:43.873409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:31.372 [2024-07-23 06:18:43.873418] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.372 [2024-07-23 06:18:43.884577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:31.372 [2024-07-23 06:18:43.884609] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:31.372 [2024-07-23 06:18:43.884618] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:31.372 [2024-07-23 06:18:43.884633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.631 [2024-07-23 06:18:43.895890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:31.631 [2024-07-23 06:18:43.895925] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.631 [2024-07-23 06:18:43.907321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:31.631 [2024-07-23 06:18:43.907389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.631 [2024-07-23 06:18:43.919231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:31.631 [2024-07-23 06:18:43.919283] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.631 passed 00:05:31.631 Test: blob_create_snapshot_power_failure ...[2024-07-23 06:18:43.954244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:31.631 [2024-07-23 06:18:43.976459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:31.631 [2024-07-23 06:18:43.987634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:31.631 passed 
00:05:31.631 Test: blob_io_unit ...passed 00:05:31.631 Test: blob_io_unit_compatibility ...passed 00:05:31.631 Test: blob_ext_md_pages ...passed 00:05:31.631 Test: blob_esnap_io_4096_4096 ...passed 00:05:31.631 Test: blob_esnap_io_512_512 ...passed 00:05:31.631 Test: blob_esnap_io_4096_512 ...passed 00:05:31.890 Test: blob_esnap_io_512_4096 ...passed 00:05:31.890 Test: blob_esnap_clone_resize ...passed 00:05:31.890 Suite: blob_bs_copy_noextent 00:05:31.890 Test: blob_open ...passed 00:05:31.890 Test: blob_create ...[2024-07-23 06:18:44.234598] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:31.890 passed 00:05:31.890 Test: blob_create_loop ...passed 00:05:31.890 Test: blob_create_fail ...[2024-07-23 06:18:44.318904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:31.890 passed 00:05:31.890 Test: blob_create_internal ...passed 00:05:31.890 Test: blob_create_zero_extent ...passed 00:05:32.149 Test: blob_snapshot ...passed 00:05:32.149 Test: blob_clone ...passed 00:05:32.149 Test: blob_inflate ...[2024-07-23 06:18:44.506802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:32.149 passed 00:05:32.149 Test: blob_delete ...passed 00:05:32.149 Test: blob_resize_test ...[2024-07-23 06:18:44.575878] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:32.149 passed 00:05:32.149 Test: blob_resize_thin_test ...passed 00:05:32.149 Test: channel_ops ...passed 00:05:32.408 Test: blob_super ...passed 00:05:32.408 Test: blob_rw_verify_iov ...passed 00:05:32.408 Test: blob_unmap ...passed 00:05:32.408 Test: blob_iter ...passed 00:05:32.408 Test: blob_parse_md ...passed 00:05:32.408 Test: bs_load_pending_removal ...passed 00:05:32.408 Test: bs_unload ...[2024-07-23 06:18:44.891203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:32.408 passed 00:05:32.667 Test: bs_usable_clusters ...passed 00:05:32.667 Test: blob_crc ...[2024-07-23 06:18:44.959817] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:32.667 [2024-07-23 06:18:44.959879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:32.667 passed 00:05:32.667 Test: blob_flags ...passed 00:05:32.667 Test: bs_version ...passed 00:05:32.667 Test: blob_set_xattrs_test ...[2024-07-23 06:18:45.065952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:32.667 [2024-07-23 06:18:45.066028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:32.667 passed 00:05:32.667 Test: blob_thin_prov_alloc ...passed 00:05:32.667 Test: blob_insert_cluster_msg_test ...passed 00:05:32.926 Test: blob_thin_prov_rw ...passed 00:05:32.926 Test: blob_thin_prov_rle ...passed 00:05:32.926 Test: blob_thin_prov_rw_iov ...passed 00:05:32.926 Test: blob_snapshot_rw ...passed 00:05:32.926 Test: blob_snapshot_rw_iov ...passed 00:05:32.926 Test: 
blob_inflate_rw ...passed 00:05:33.185 Test: blob_snapshot_freeze_io ...passed 00:05:33.185 Test: blob_operation_split_rw ...passed 00:05:33.185 Test: blob_operation_split_rw_iov ...passed 00:05:33.185 Test: blob_simultaneous_operations ...[2024-07-23 06:18:45.597895] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:33.185 [2024-07-23 06:18:45.597962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:33.185 [2024-07-23 06:18:45.598246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:33.185 [2024-07-23 06:18:45.598266] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:33.185 [2024-07-23 06:18:45.600492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:33.185 [2024-07-23 06:18:45.600510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:33.185 [2024-07-23 06:18:45.600526] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:33.185 [2024-07-23 06:18:45.600534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:33.185 passed 00:05:33.185 Test: blob_persist_test ...passed 00:05:33.185 Test: blob_decouple_snapshot ...passed 00:05:33.444 Test: blob_seek_io_unit ...passed 00:05:33.444 Test: blob_nested_freezes ...passed 00:05:33.444 Test: blob_clone_resize ...passed 00:05:33.444 Test: blob_shallow_copy ...[2024-07-23 06:18:45.828302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:33.444 [2024-07-23 06:18:45.828418] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:33.444 [2024-07-23 06:18:45.828429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:33.444 passed 00:05:33.444 Suite: blob_blob_copy_noextent 00:05:33.444 Test: blob_write ...passed 00:05:33.444 Test: blob_read ...passed 00:05:33.444 Test: blob_rw_verify ...passed 00:05:33.705 Test: blob_rw_verify_iov_nomem ...passed 00:05:33.705 Test: blob_rw_iov_read_only ...passed 00:05:33.705 Test: blob_xattr ...passed 00:05:33.705 Test: blob_dirty_shutdown ...passed 00:05:33.705 Test: blob_is_degraded ...passed 00:05:33.705 Suite: blob_esnap_bs_copy_noextent 00:05:33.705 Test: blob_esnap_create ...passed 00:05:33.705 Test: blob_esnap_thread_add_remove ...passed 00:05:33.963 Test: blob_esnap_clone_snapshot ...passed 00:05:33.963 Test: blob_esnap_clone_inflate ...passed 00:05:33.963 Test: blob_esnap_clone_decouple ...passed 00:05:33.963 Test: blob_esnap_clone_reload ...passed 00:05:33.963 Test: blob_esnap_hotplug ...passed 00:05:33.964 Test: blob_set_parent ...[2024-07-23 06:18:46.388695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:05:33.964 [2024-07-23 06:18:46.388763] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:05:33.964 [2024-07-23 06:18:46.388787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:05:33.964 [2024-07-23 06:18:46.388798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:05:33.964 [2024-07-23 06:18:46.388849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:33.964 passed 00:05:33.964 Test: blob_set_external_parent ...[2024-07-23 06:18:46.421362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:05:33.964 [2024-07-23 06:18:46.421408] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7797:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:05:33.964 [2024-07-23 06:18:46.421417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:05:33.964 [2024-07-23 06:18:46.421465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:33.964 passed 00:05:33.964 Suite: blob_copy_extent 00:05:33.964 Test: blob_init ...[2024-07-23 06:18:46.432429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:33.964 passed 00:05:33.964 Test: blob_thin_provision ...passed 00:05:33.964 Test: blob_read_only ...passed 00:05:34.223 Test: bs_load ...[2024-07-23 06:18:46.479793] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:34.223 passed 00:05:34.223 Test: bs_load_custom_cluster_size ...passed 00:05:34.223 Test: bs_load_after_failed_grow ...passed 00:05:34.223 Test: bs_cluster_sz ...[2024-07-23 06:18:46.503947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:34.223 [2024-07-23 06:18:46.504017] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:05:34.223 [2024-07-23 06:18:46.504044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:34.223 passed 00:05:34.223 Test: bs_resize_md ...passed 00:05:34.223 Test: bs_destroy ...passed 00:05:34.223 Test: bs_type ...passed 00:05:34.223 Test: bs_super_block ...passed 00:05:34.223 Test: bs_test_recover_cluster_count ...passed 00:05:34.223 Test: bs_grow_live ...passed 00:05:34.223 Test: bs_grow_live_no_space ...passed 00:05:34.223 Test: bs_test_grow ...passed 00:05:34.223 Test: blob_serialize_test ...passed 00:05:34.223 Test: super_block_crc ...passed 00:05:34.223 Test: blob_thin_prov_write_count_io ...passed 00:05:34.223 Test: blob_thin_prov_unmap_cluster ...passed 00:05:34.223 Test: bs_load_iter_test ...passed 00:05:34.223 Test: blob_relations ...[2024-07-23 06:18:46.672605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:34.223 [2024-07-23 06:18:46.672696] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.223 [2024-07-23 06:18:46.672825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:34.223 [2024-07-23 06:18:46.672836] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.223 passed 00:05:34.223 Test: blob_relations2 ...[2024-07-23 06:18:46.684485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:34.223 [2024-07-23 06:18:46.684510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.223 [2024-07-23 06:18:46.684519] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:34.223 [2024-07-23 06:18:46.684526] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.223 [2024-07-23 06:18:46.684687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:34.223 [2024-07-23 06:18:46.684699] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.223 [2024-07-23 06:18:46.684737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:34.223 [2024-07-23 06:18:46.684746] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.223 passed 00:05:34.223 Test: blob_relations3 ...passed 00:05:34.482 Test: blobstore_clean_power_failure ...passed 00:05:34.482 Test: blob_delete_snapshot_power_failure ...[2024-07-23 06:18:46.843426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:34.482 [2024-07-23 06:18:46.854889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:34.482 [2024-07-23 06:18:46.866583] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:34.482 [2024-07-23 06:18:46.866622] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:34.482 [2024-07-23 06:18:46.866631] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.482 [2024-07-23 06:18:46.877732] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:34.482 [2024-07-23 06:18:46.877763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:34.482 [2024-07-23 06:18:46.877772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:34.482 [2024-07-23 06:18:46.877779] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.482 [2024-07-23 06:18:46.888495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:34.482 [2024-07-23 06:18:46.888527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:34.482 [2024-07-23 06:18:46.888534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:34.482 [2024-07-23 06:18:46.888542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.482 [2024-07-23 06:18:46.899257] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:34.482 [2024-07-23 06:18:46.899293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.482 [2024-07-23 06:18:46.910272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:34.482 [2024-07-23 06:18:46.910309] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.482 [2024-07-23 06:18:46.921673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:34.482 [2024-07-23 06:18:46.921738] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.482 passed 00:05:34.482 Test: blob_create_snapshot_power_failure ...[2024-07-23 06:18:46.956073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:34.482 [2024-07-23 06:18:46.967419] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:34.482 [2024-07-23 06:18:46.990272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:34.740 [2024-07-23 06:18:47.001429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:34.740 passed 00:05:34.740 Test: blob_io_unit ...passed 00:05:34.740 Test: blob_io_unit_compatibility ...passed 00:05:34.740 Test: blob_ext_md_pages ...passed 00:05:34.740 Test: blob_esnap_io_4096_4096 ...passed 00:05:34.740 Test: blob_esnap_io_512_512 ...passed 00:05:34.740 Test: blob_esnap_io_4096_512 ...passed 00:05:34.740 Test: 
blob_esnap_io_512_4096 ...passed 00:05:34.740 Test: blob_esnap_clone_resize ...passed 00:05:34.740 Suite: blob_bs_copy_extent 00:05:34.740 Test: blob_open ...passed 00:05:34.740 Test: blob_create ...[2024-07-23 06:18:47.250345] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:34.998 passed 00:05:34.998 Test: blob_create_loop ...passed 00:05:34.998 Test: blob_create_fail ...[2024-07-23 06:18:47.333506] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:34.999 passed 00:05:34.999 Test: blob_create_internal ...passed 00:05:34.999 Test: blob_create_zero_extent ...passed 00:05:34.999 Test: blob_snapshot ...passed 00:05:34.999 Test: blob_clone ...passed 00:05:34.999 Test: blob_inflate ...[2024-07-23 06:18:47.509431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:35.257 passed 00:05:35.257 Test: blob_delete ...passed 00:05:35.257 Test: blob_resize_test ...[2024-07-23 06:18:47.575617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:35.257 passed 00:05:35.257 Test: blob_resize_thin_test ...passed 00:05:35.257 Test: channel_ops ...passed 00:05:35.257 Test: blob_super ...passed 00:05:35.257 Test: blob_rw_verify_iov ...passed 00:05:35.257 Test: blob_unmap ...passed 00:05:35.516 Test: blob_iter ...passed 00:05:35.516 Test: blob_parse_md ...passed 00:05:35.516 Test: bs_load_pending_removal ...passed 00:05:35.516 Test: bs_unload ...[2024-07-23 06:18:47.886532] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:35.516 passed 00:05:35.516 Test: bs_usable_clusters ...passed 00:05:35.516 Test: blob_crc ...[2024-07-23 06:18:47.954537] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:35.516 [2024-07-23 06:18:47.954587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:35.516 passed 00:05:35.516 Test: blob_flags ...passed 00:05:35.774 Test: bs_version ...passed 00:05:35.774 Test: blob_set_xattrs_test ...[2024-07-23 06:18:48.060465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:35.774 [2024-07-23 06:18:48.060514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:35.774 passed 00:05:35.774 Test: blob_thin_prov_alloc ...passed 00:05:35.774 Test: blob_insert_cluster_msg_test ...passed 00:05:35.774 Test: blob_thin_prov_rw ...passed 00:05:35.774 Test: blob_thin_prov_rle ...passed 00:05:35.774 Test: blob_thin_prov_rw_iov ...passed 00:05:36.033 Test: blob_snapshot_rw ...passed 00:05:36.033 Test: blob_snapshot_rw_iov ...passed 00:05:36.033 Test: blob_inflate_rw ...passed 00:05:36.033 Test: blob_snapshot_freeze_io ...passed 00:05:36.033 Test: blob_operation_split_rw ...passed 00:05:36.292 Test: blob_operation_split_rw_iov ...passed 00:05:36.292 Test: blob_simultaneous_operations ...[2024-07-23 06:18:48.597382] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:36.292 [2024-07-23 06:18:48.597446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:36.292 [2024-07-23 06:18:48.597761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:36.292 [2024-07-23 06:18:48.597771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:36.292 [2024-07-23 06:18:48.600047] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:36.292 [2024-07-23 06:18:48.600064] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:36.292 [2024-07-23 06:18:48.600080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:36.292 [2024-07-23 06:18:48.600087] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:36.292 passed 00:05:36.292 Test: blob_persist_test ...passed 00:05:36.292 Test: blob_decouple_snapshot ...passed 00:05:36.293 Test: blob_seek_io_unit ...passed 00:05:36.293 Test: blob_nested_freezes ...passed 00:05:36.293 Test: blob_clone_resize ...passed 00:05:36.551 Test: blob_shallow_copy ...[2024-07-23 06:18:48.821902] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:36.551 [2024-07-23 06:18:48.822012] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:36.551 [2024-07-23 06:18:48.822023] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:36.551 passed 00:05:36.551 Suite: blob_blob_copy_extent 00:05:36.551 Test: blob_write ...passed 00:05:36.551 Test: blob_read ...passed 00:05:36.551 Test: blob_rw_verify ...passed 00:05:36.551 Test: blob_rw_verify_iov_nomem ...passed 00:05:36.551 Test: blob_rw_iov_read_only ...passed 00:05:36.551 Test: blob_xattr ...passed 00:05:36.809 Test: blob_dirty_shutdown ...passed 00:05:36.809 Test: blob_is_degraded ...passed 00:05:36.809 Suite: blob_esnap_bs_copy_extent 00:05:36.809 Test: blob_esnap_create ...passed 00:05:36.809 Test: blob_esnap_thread_add_remove ...passed 00:05:36.809 Test: blob_esnap_clone_snapshot ...passed 00:05:36.809 Test: blob_esnap_clone_inflate ...passed 00:05:36.809 Test: blob_esnap_clone_decouple ...passed 00:05:37.068 Test: blob_esnap_clone_reload ...passed 00:05:37.068 Test: blob_esnap_hotplug ...passed 00:05:37.068 Test: blob_set_parent ...[2024-07-23 06:18:49.385833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:05:37.068 [2024-07-23 06:18:49.385891] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:05:37.068 [2024-07-23 06:18:49.386080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:05:37.068 
[2024-07-23 06:18:49.386094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:05:37.068 [2024-07-23 06:18:49.386149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:37.068 passed 00:05:37.068 Test: blob_set_external_parent ...[2024-07-23 06:18:49.419665] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:05:37.068 [2024-07-23 06:18:49.419753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7797:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:05:37.068 [2024-07-23 06:18:49.419779] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:05:37.068 [2024-07-23 06:18:49.419825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:37.068 passed 00:05:37.068 00:05:37.068 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.068 suites 16 16 n/a 0 0 00:05:37.068 tests 376 376 376 0 0 00:05:37.068 asserts 143973 143973 143973 0 n/a 00:05:37.068 00:05:37.068 Elapsed time = 11.930 seconds 00:05:37.068 06:18:49 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:05:37.068 00:05:37.068 00:05:37.068 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.068 http://cunit.sourceforge.net/ 00:05:37.068 00:05:37.068 00:05:37.068 Suite: blob_bdev 00:05:37.068 Test: create_bs_dev ...passed 00:05:37.068 Test: create_bs_dev_ro ...[2024-07-23 06:18:49.442353] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:05:37.068 passed 00:05:37.068 Test: create_bs_dev_rw ...passed 00:05:37.068 Test: claim_bs_dev ...passed 00:05:37.068 Test: claim_bs_dev_ro ...passed 00:05:37.068 Test: deferred_destroy_refs ...passed 00:05:37.068 Test: deferred_destroy_channels ...passed 00:05:37.068 Test: deferred_destroy_threads ...passed 00:05:37.068 00:05:37.068 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.068 suites 1 1 n/a 0 0 00:05:37.068 tests 8 8 8 0 0 00:05:37.068 asserts 119 119 119 0 n/a 00:05:37.068 00:05:37.068 Elapsed time = 0.000 seconds 00:05:37.068 [2024-07-23 06:18:49.442651] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:05:37.068 06:18:49 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:05:37.068 00:05:37.068 00:05:37.068 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.068 http://cunit.sourceforge.net/ 00:05:37.068 00:05:37.068 00:05:37.069 Suite: tree 00:05:37.069 Test: blobfs_tree_op_test ...passed 00:05:37.069 00:05:37.069 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.069 suites 1 1 n/a 0 0 00:05:37.069 tests 1 1 1 0 0 00:05:37.069 asserts 27 27 27 0 n/a 00:05:37.069 00:05:37.069 Elapsed time = 0.000 seconds 00:05:37.069 06:18:49 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:05:37.069 00:05:37.069 00:05:37.069 
CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.069 http://cunit.sourceforge.net/ 00:05:37.069 00:05:37.069 00:05:37.069 Suite: blobfs_async_ut 00:05:37.069 Test: fs_init ...passed 00:05:37.069 Test: fs_open ...passed 00:05:37.069 Test: fs_create ...passed 00:05:37.069 Test: fs_truncate ...passed 00:05:37.069 Test: fs_rename ...[2024-07-23 06:18:49.549474] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:05:37.069 passed 00:05:37.069 Test: fs_rw_async ...passed 00:05:37.069 Test: fs_writev_readv_async ...passed 00:05:37.069 Test: tree_find_buffer_ut ...passed 00:05:37.328 Test: channel_ops ...passed 00:05:37.328 Test: channel_ops_sync ...passed 00:05:37.328 00:05:37.328 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.328 suites 1 1 n/a 0 0 00:05:37.328 tests 10 10 10 0 0 00:05:37.328 asserts 292 292 292 0 n/a 00:05:37.328 00:05:37.328 Elapsed time = 0.148 seconds 00:05:37.328 06:18:49 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:05:37.328 00:05:37.328 00:05:37.328 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.328 http://cunit.sourceforge.net/ 00:05:37.328 00:05:37.328 00:05:37.328 Suite: blobfs_sync_ut 00:05:37.328 Test: cache_read_after_write ...[2024-07-23 06:18:49.661397] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:05:37.328 passed 00:05:37.328 Test: file_length ...passed 00:05:37.328 Test: append_write_to_extend_blob ...passed 00:05:37.328 Test: partial_buffer ...passed 00:05:37.328 Test: cache_write_null_buffer ...passed 00:05:37.328 Test: fs_create_sync ...passed 00:05:37.328 Test: fs_rename_sync ...passed 00:05:37.328 Test: cache_append_no_cache ...passed 00:05:37.328 Test: fs_delete_file_without_close ...passed 00:05:37.328 00:05:37.328 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.328 suites 1 1 n/a 0 0 00:05:37.328 tests 9 9 9 0 0 00:05:37.328 asserts 345 345 345 0 n/a 00:05:37.328 00:05:37.328 Elapsed time = 0.281 seconds 00:05:37.328 06:18:49 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:05:37.328 00:05:37.328 00:05:37.328 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.328 http://cunit.sourceforge.net/ 00:05:37.328 00:05:37.328 00:05:37.329 Suite: blobfs_bdev_ut 00:05:37.329 Test: spdk_blobfs_bdev_detect_test ...passed 00:05:37.329 Test: spdk_blobfs_bdev_create_test ...passed 00:05:37.329 Test: spdk_blobfs_bdev_mount_test ...passed 00:05:37.329 00:05:37.329 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.329 suites 1 1 n/a 0 0 00:05:37.329 tests 3 3 3 0 0 00:05:37.329 asserts 9 9 9 0 n/a 00:05:37.329 00:05:37.329 Elapsed time = 0.000 seconds 00:05:37.329 [2024-07-23 06:18:49.768939] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:05:37.329 [2024-07-23 06:18:49.769133] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:05:37.329 00:05:37.329 real 0m12.290s 00:05:37.329 user 0m12.245s 00:05:37.329 sys 0m0.182s 00:05:37.329 06:18:49 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.329 
************************************ 00:05:37.329 END TEST unittest_blob_blobfs 00:05:37.329 ************************************ 00:05:37.329 06:18:49 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:05:37.329 06:18:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:37.329 06:18:49 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:05:37.329 06:18:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.329 06:18:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.329 06:18:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.329 ************************************ 00:05:37.329 START TEST unittest_event 00:05:37.329 ************************************ 00:05:37.329 06:18:49 unittest.unittest_event -- common/autotest_common.sh@1123 -- # unittest_event 00:05:37.329 06:18:49 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:05:37.329 00:05:37.329 00:05:37.329 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.329 http://cunit.sourceforge.net/ 00:05:37.329 00:05:37.329 00:05:37.329 Suite: app_suite 00:05:37.329 Test: test_spdk_app_parse_args ...app_ut [options] 00:05:37.329 00:05:37.329 CPU options: 00:05:37.329 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:05:37.329 (like [0,1,10]) 00:05:37.329 --lcores lcore to CPU mapping list. The list is in the format: 00:05:37.329 [<,lcores[@CPUs]>...] 00:05:37.329 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:05:37.329 Within the group, '-' is used for range separator, 00:05:37.329 ',' is used for single number separator. 00:05:37.329 app_ut: invalid option -- z 00:05:37.329 '( )' can be omitted for single element group, 00:05:37.329 '@' can be omitted if cpus and lcores have the same value 00:05:37.329 --disable-cpumask-locks Disable CPU core lock files. 00:05:37.329 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:05:37.329 pollers in the app support interrupt mode) 00:05:37.329 -p, --main-core main (primary) core for DPDK 00:05:37.329 00:05:37.329 Configuration options: 00:05:37.329 -c, --config, --json JSON config file 00:05:37.329 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:05:37.329 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:05:37.329 --wait-for-rpc wait for RPCs to initialize subsystems 00:05:37.329 --rpcs-allowed comma-separated list of permitted RPCS 00:05:37.329 --json-ignore-init-errors don't exit on invalid config entry 00:05:37.329 00:05:37.329 Memory options: 00:05:37.329 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:05:37.329 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:05:37.329 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:05:37.329 -R, --huge-unlink unlink huge files after initialization 00:05:37.329 -n, --mem-channels number of memory channels used for DPDK 00:05:37.329 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:05:37.329 --msg-mempool-size global message memory pool size in count (default: 262143) 00:05:37.329 --no-huge run without using hugepages 00:05:37.329 -i, --shm-id shared memory ID (optional) 00:05:37.329 -g, --single-file-segments force creating just one hugetlbfs file 00:05:37.329 00:05:37.329 PCI options: 00:05:37.329 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:05:37.329 -B, --pci-blocked pci addr to block (can be used more than once) 00:05:37.329 -u, --no-pci disable PCI access 00:05:37.329 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:05:37.329 00:05:37.329 Log options: 00:05:37.329 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:05:37.329 --silence-noticelog disable notice level logging to stderr 00:05:37.329 00:05:37.329 Trace options: 00:05:37.329 --num-trace-entries number of trace entries for each core, must be power of 2, 00:05:37.329 setting 0 to disable trace (default 32768) 00:05:37.329 Tracepoints vary in size and can use more than one trace entry. 00:05:37.329 -e, --tpoint-group [:] 00:05:37.329 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:05:37.329 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:05:37.329 a tracepoint group. First tpoint inside a group can be enabled by 00:05:37.329 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:05:37.329 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:05:37.329 in /include/spdk_internal/trace_defs.h 00:05:37.329 00:05:37.329 Other options: 00:05:37.329 -h, --help show this usage 00:05:37.329 -v, --version print SPDK version 00:05:37.329 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:05:37.329 --env-context Opaque context for use of the env implementation 00:05:37.329 app_ut [options] 00:05:37.329 00:05:37.329 CPU options: 00:05:37.329 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:05:37.329 (like [0,1,10]) 00:05:37.329 --lcores lcore to CPU mapping list. The list is in the format: 00:05:37.329 [<,lcores[@CPUs]>...] 00:05:37.329 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:05:37.329 Within the group, '-' is used for range separator, 00:05:37.329 ',' is used for single number separator. 00:05:37.329 '( )' can be omitted for single element group, 00:05:37.329 '@' can be omitted if cpus and lcores have the same value 00:05:37.329 --disable-cpumask-locks Disable CPU core lock files. 
00:05:37.329 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:05:37.329 pollers in the app support interrupt mode) 00:05:37.329 -p, --main-core main (primary) core for DPDK 00:05:37.329 00:05:37.329 Configuration options: 00:05:37.329 -c, --config, --json JSON config file 00:05:37.329 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:05:37.329 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:05:37.329 --wait-for-rpc wait for RPCs to initialize subsystems 00:05:37.329 --rpcs-allowed comma-separated list of permitted RPCS 00:05:37.329 --json-ignore-init-errors don't exit on invalid config entry 00:05:37.329 00:05:37.329 Memory options: 00:05:37.329 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:05:37.330 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:05:37.330 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:05:37.330 -R, --huge-unlink unlink huge files after initialization 00:05:37.330 -n, --mem-channels number of memory channels used for DPDK 00:05:37.330 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:05:37.330 --msg-mempool-size global message memory pool size in count (default: 262143) 00:05:37.330 --no-huge run without using hugepages 00:05:37.330 -i, --shm-id shared memory ID (optional) 00:05:37.330 -g, --single-file-segments force creating just one hugetlbfs file 00:05:37.330 00:05:37.330 PCI options: 00:05:37.330 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:05:37.330 -B, --pci-blocked pci addr to block (can be used more than once) 00:05:37.330 -u, --no-pci disable PCI access 00:05:37.330 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:05:37.330 00:05:37.330 Log options: 00:05:37.330 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:05:37.330 --silence-noticelog disable notice level logging to stderr 00:05:37.330 00:05:37.330 Trace options: 00:05:37.330 --num-trace-entries number of trace entries for each core, must be power of 2, 00:05:37.330 setting 0 to disable trace (default 32768) 00:05:37.330 Tracepoints vary in size and can use more than one trace entry. 00:05:37.330 -e, --tpoint-group [:] 00:05:37.330 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:05:37.330 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:05:37.330 a tracepoint group. First tpoint inside a group can be enabled by 00:05:37.330 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:05:37.330 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:05:37.330 in /include/spdk_internal/trace_defs.h 00:05:37.330 00:05:37.330 Other options: 00:05:37.330 -h, --help show this usage 00:05:37.330 -v, --version print SPDK version 00:05:37.330 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:05:37.330 --env-context Opaque context for use of the env implementation 00:05:37.330 app_ut: unrecognized option `--test-long-opt' 00:05:37.330 [2024-07-23 06:18:49.816425] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1193:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
00:05:37.330 app_ut [options] 00:05:37.330 00:05:37.330 CPU options: 00:05:37.330 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:05:37.330 (like [0,1,10]) 00:05:37.330 --lcores lcore to CPU mapping list. The list is in the format: 00:05:37.330 [<,lcores[@CPUs]>...] 00:05:37.330 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:05:37.330 Within the group, '-' is used for range separator, 00:05:37.330 ',' is used for single number separator. 00:05:37.330 '( )' can be omitted for single element group, 00:05:37.330 '@' can be omitted if cpus and lcores have the same value 00:05:37.330 --disable-cpumask-locks Disable CPU core lock files. 00:05:37.330 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:05:37.330 pollers in the app support interrupt mode) 00:05:37.330 -p, --main-core main (primary) core for DPDK 00:05:37.330 00:05:37.330 Configuration options: 00:05:37.330 -c, --config, --json JSON config file 00:05:37.330 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:05:37.330 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:05:37.330 --wait-for-rpc wait for RPCs to initialize subsystems 00:05:37.330 --rpcs-allowed comma-separated list of permitted RPCS 00:05:37.330 --json-ignore-init-errors don't exit on invalid config entry 00:05:37.330 00:05:37.330 Memory options: 00:05:37.330 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:05:37.330 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:05:37.330 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:05:37.330 -R, --huge-unlink unlink huge files after initialization 00:05:37.330 -n, --mem-channels number of memory channels used for DPDK 00:05:37.330 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:05:37.330 --msg-mempool-size global message memory pool size in count (default: 262143) 00:05:37.330 --no-huge run without using hugepages 00:05:37.330 -i, --shm-id shared memory ID (optional) 00:05:37.330 -g, --single-file-segments force creating just one hugetlbfs file 00:05:37.330 00:05:37.330 PCI options: 00:05:37.330 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:05:37.330 -B, --pci-blocked pci addr to block (can be used more than once) 00:05:37.330 -u, --no-pci disable PCI access 00:05:37.330 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:05:37.330 00:05:37.330 Log options: 00:05:37.330 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:05:37.330 --silence-noticelog disable notice level logging to stderr 00:05:37.330 00:05:37.330 Trace options: 00:05:37.330 --num-trace-entries number of trace entries for each core, must be power of 2, 00:05:37.330 setting 0 to disable trace (default 32768) 00:05:37.330 Tracepoints vary in size and can use more than one trace entry. 00:05:37.330 -e, --tpoint-group [:] 00:05:37.330 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:05:37.330 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:05:37.330 a tracepoint group. First tpoint inside a group can be enabled by 00:05:37.330 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:05:37.330 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:05:37.330 in /include/spdk_internal/trace_defs.h 00:05:37.330 00:05:37.330 Other options: 00:05:37.330 -h, --help show this usage 00:05:37.330 -v, --version print SPDK version 00:05:37.330 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:05:37.330 --env-context Opaque context for use of the env implementation 00:05:37.330 passed 00:05:37.330 00:05:37.330 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.330 suites 1 1 n/a 0 0 00:05:37.330 tests 1 1 1 0 0 00:05:37.330 asserts 8 8 8 0 n/a 00:05:37.330 00:05:37.330 Elapsed time = 0.000 seconds 00:05:37.330 [2024-07-23 06:18:49.816684] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:05:37.330 [2024-07-23 06:18:49.816792] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:05:37.330 06:18:49 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:05:37.330 00:05:37.330 00:05:37.330 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.330 http://cunit.sourceforge.net/ 00:05:37.330 00:05:37.330 00:05:37.330 Suite: app_suite 00:05:37.330 Test: test_create_reactor ...passed 00:05:37.330 Test: test_init_reactors ...passed 00:05:37.330 Test: test_event_call ...passed 00:05:37.330 Test: test_schedule_thread ...passed 00:05:37.330 Test: test_reschedule_thread ...passed 00:05:37.330 Test: test_bind_thread ...passed 00:05:37.330 Test: test_for_each_reactor ...passed 00:05:37.330 Test: test_reactor_stats ...passed 00:05:37.330 Test: test_scheduler ...passed 00:05:37.330 Test: test_governor ...passed 00:05:37.330 00:05:37.330 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.330 suites 1 1 n/a 0 0 00:05:37.330 tests 10 10 10 0 0 00:05:37.330 asserts 336 336 336 0 n/a 00:05:37.330 00:05:37.330 Elapsed time = 0.008 seconds 00:05:37.330 00:05:37.330 real 0m0.017s 00:05:37.330 user 0m0.009s 00:05:37.330 sys 0m0.009s 00:05:37.330 06:18:49 unittest.unittest_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.330 ************************************ 00:05:37.330 END TEST unittest_event 00:05:37.330 ************************************ 00:05:37.330 06:18:49 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:05:37.590 06:18:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:37.590 06:18:49 unittest -- unit/unittest.sh@235 -- # uname -s 00:05:37.590 06:18:49 unittest -- unit/unittest.sh@235 -- # '[' FreeBSD = Linux ']' 00:05:37.590 06:18:49 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:05:37.590 06:18:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.590 06:18:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.590 06:18:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.590 ************************************ 00:05:37.590 START TEST unittest_accel 00:05:37.590 ************************************ 00:05:37.590 06:18:49 unittest.unittest_accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:05:37.590 00:05:37.590 00:05:37.590 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.590 http://cunit.sourceforge.net/ 00:05:37.590 00:05:37.590 00:05:37.590 Suite: accel_sequence 00:05:37.590 Test: 
test_sequence_fill_copy ...passed 00:05:37.590 Test: test_sequence_abort ...passed 00:05:37.590 Test: test_sequence_append_error ...passed 00:05:37.590 Test: test_sequence_completion_error ...[2024-07-23 06:18:49.883081] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1960:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x1ce0cf4ce9c0 00:05:37.590 passed 00:05:37.590 Test: test_sequence_decompress ...[2024-07-23 06:18:49.883271] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1960:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x1ce0cf4ce9c0 00:05:37.590 [2024-07-23 06:18:49.883287] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1870:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x1ce0cf4ce9c0 00:05:37.590 [2024-07-23 06:18:49.883301] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1870:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x1ce0cf4ce9c0 00:05:37.590 passed 00:05:37.590 Test: test_sequence_reverse ...passed 00:05:37.590 Test: test_sequence_copy_elision ...passed 00:05:37.590 Test: test_sequence_accel_buffers ...passed 00:05:37.590 Test: test_sequence_memory_domain ...passed 00:05:37.591 Test: test_sequence_module_memory_domain ...[2024-07-23 06:18:49.884787] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1762:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:05:37.591 [2024-07-23 06:18:49.884821] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1801:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:05:37.591 passed 00:05:37.591 Test: test_sequence_crypto ...passed 00:05:37.591 Test: test_sequence_driver ...[2024-07-23 06:18:49.885598] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1909:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x1ce0cf4ce240 using driver: ut 00:05:37.591 [2024-07-23 06:18:49.885628] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1974:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x1ce0cf4ce240 through driver: ut 00:05:37.591 passed 00:05:37.591 Test: test_sequence_same_iovs ...passed 00:05:37.591 Test: test_sequence_crc32 ...passed 00:05:37.591 Suite: accel 00:05:37.591 Test: test_spdk_accel_task_complete ...passed 00:05:37.591 Test: test_get_task ...passed 00:05:37.591 Test: test_spdk_accel_submit_copy ...passed 00:05:37.591 Test: test_spdk_accel_submit_dualcast ...[2024-07-23 06:18:49.886266] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:05:37.591 [2024-07-23 06:18:49.886281] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:05:37.591 passed 00:05:37.591 Test: test_spdk_accel_submit_compare ...passed 00:05:37.591 Test: test_spdk_accel_submit_fill ...passed 00:05:37.591 Test: test_spdk_accel_submit_crc32c ...passed 00:05:37.591 Test: test_spdk_accel_submit_crc32cv ...passed 00:05:37.591 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:05:37.591 Test: test_spdk_accel_submit_xor ...passed 00:05:37.591 Test: test_spdk_accel_module_find_by_name ...passed 00:05:37.591 Test: test_spdk_accel_module_register ...passed 00:05:37.591 00:05:37.591 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.591 suites 2 2 n/a 0 0 00:05:37.591 tests 26 26 26 0 0 00:05:37.591 asserts 830 830 830 0 n/a 00:05:37.591 00:05:37.591 Elapsed time = 
0.008 seconds 00:05:37.591 00:05:37.591 real 0m0.012s 00:05:37.591 user 0m0.004s 00:05:37.591 sys 0m0.011s 00:05:37.591 06:18:49 unittest.unittest_accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.591 06:18:49 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.591 ************************************ 00:05:37.591 END TEST unittest_accel 00:05:37.591 ************************************ 00:05:37.591 06:18:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:37.591 06:18:49 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:05:37.591 06:18:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.591 06:18:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.591 06:18:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.591 ************************************ 00:05:37.591 START TEST unittest_ioat 00:05:37.591 ************************************ 00:05:37.591 06:18:49 unittest.unittest_ioat -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:05:37.591 00:05:37.591 00:05:37.591 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.591 http://cunit.sourceforge.net/ 00:05:37.591 00:05:37.591 00:05:37.591 Suite: ioat 00:05:37.591 Test: ioat_state_check ...passed 00:05:37.591 00:05:37.591 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.591 suites 1 1 n/a 0 0 00:05:37.591 tests 1 1 1 0 0 00:05:37.591 asserts 32 32 32 0 n/a 00:05:37.591 00:05:37.591 Elapsed time = 0.000 seconds 00:05:37.591 00:05:37.591 real 0m0.006s 00:05:37.591 user 0m0.000s 00:05:37.591 sys 0m0.008s 00:05:37.591 06:18:49 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.591 06:18:49 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:05:37.591 ************************************ 00:05:37.591 END TEST unittest_ioat 00:05:37.591 ************************************ 00:05:37.591 06:18:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:37.591 06:18:49 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:37.591 06:18:49 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:05:37.591 06:18:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.591 06:18:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.591 06:18:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.591 ************************************ 00:05:37.591 START TEST unittest_idxd_user 00:05:37.591 ************************************ 00:05:37.591 06:18:49 unittest.unittest_idxd_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:05:37.591 00:05:37.591 00:05:37.591 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.591 http://cunit.sourceforge.net/ 00:05:37.591 00:05:37.591 00:05:37.591 Suite: idxd_user 00:05:37.591 Test: test_idxd_wait_cmd ...passed 00:05:37.591 Test: test_idxd_reset_dev ...passed 00:05:37.591 Test: test_idxd_group_config ...passed 00:05:37.591 Test: test_idxd_wq_config ...passed 00:05:37.591 00:05:37.591 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.591 suites 1 1 n/a 0 0 00:05:37.591 tests 4 4 4 0 0 00:05:37.591 
asserts 20 20 20 0 n/a 00:05:37.591 00:05:37.591 Elapsed time = 0.000 seconds 00:05:37.591 [2024-07-23 06:18:49.982348] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:05:37.591 [2024-07-23 06:18:49.982580] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:05:37.591 [2024-07-23 06:18:49.982630] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:05:37.591 [2024-07-23 06:18:49.982644] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:05:37.591 00:05:37.591 real 0m0.005s 00:05:37.591 user 0m0.004s 00:05:37.591 sys 0m0.004s 00:05:37.591 06:18:49 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.591 ************************************ 00:05:37.591 END TEST unittest_idxd_user 00:05:37.591 ************************************ 00:05:37.591 06:18:49 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:05:37.591 06:18:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:37.591 06:18:50 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:05:37.591 06:18:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.591 06:18:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.591 06:18:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.591 ************************************ 00:05:37.591 START TEST unittest_iscsi 00:05:37.591 ************************************ 00:05:37.591 06:18:50 unittest.unittest_iscsi -- common/autotest_common.sh@1123 -- # unittest_iscsi 00:05:37.591 06:18:50 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:05:37.591 00:05:37.591 00:05:37.591 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.591 http://cunit.sourceforge.net/ 00:05:37.591 00:05:37.591 00:05:37.591 Suite: conn_suite 00:05:37.591 Test: read_task_split_in_order_case ...passed 00:05:37.591 Test: read_task_split_reverse_order_case ...passed 00:05:37.591 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:05:37.591 Test: process_non_read_task_completion_test ...passed 00:05:37.591 Test: free_tasks_on_connection ...passed 00:05:37.591 Test: free_tasks_with_queued_datain ...passed 00:05:37.591 Test: abort_queued_datain_task_test ...passed 00:05:37.591 Test: abort_queued_datain_tasks_test ...passed 00:05:37.591 00:05:37.591 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.591 suites 1 1 n/a 0 0 00:05:37.591 tests 8 8 8 0 0 00:05:37.591 asserts 230 230 230 0 n/a 00:05:37.591 00:05:37.591 Elapsed time = 0.000 seconds 00:05:37.591 06:18:50 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:05:37.591 00:05:37.591 00:05:37.591 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.591 http://cunit.sourceforge.net/ 00:05:37.591 00:05:37.591 00:05:37.591 Suite: iscsi_suite 00:05:37.591 Test: param_negotiation_test ...passed 00:05:37.591 Test: list_negotiation_test ...passed 00:05:37.591 Test: parse_valid_test ...passed 00:05:37.591 Test: parse_invalid_test ...[2024-07-23 06:18:50.038113] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:05:37.592 [2024-07-23 06:18:50.038660] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:05:37.592 [2024-07-23 06:18:50.038718] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:05:37.592 [2024-07-23 06:18:50.038759] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:05:37.592 [2024-07-23 06:18:50.038795] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:05:37.592 passed 00:05:37.592 00:05:37.592 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.592 suites 1 1 n/a 0 0 00:05:37.592 tests 4 4 4 0 0 00:05:37.592 asserts 161 161 161 0 n/a 00:05:37.592 00:05:37.592 Elapsed time = 0.000 seconds 00:05:37.592 [2024-07-23 06:18:50.038815] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:05:37.592 [2024-07-23 06:18:50.038833] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:05:37.592 06:18:50 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:05:37.592 00:05:37.592 00:05:37.592 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.592 http://cunit.sourceforge.net/ 00:05:37.592 00:05:37.592 00:05:37.592 Suite: iscsi_target_node_suite 00:05:37.592 Test: add_lun_test_cases ...[2024-07-23 06:18:50.045370] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1253:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:05:37.592 passed 00:05:37.592 Test: allow_any_allowed ...[2024-07-23 06:18:50.045767] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:05:37.592 [2024-07-23 06:18:50.045806] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:05:37.592 [2024-07-23 06:18:50.045837] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:05:37.592 [2024-07-23 06:18:50.045865] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:05:37.592 passed 00:05:37.592 Test: allow_ipv6_allowed ...passed 00:05:37.592 Test: allow_ipv6_denied ...passed 00:05:37.592 Test: allow_ipv6_invalid ...passed 00:05:37.592 Test: allow_ipv4_allowed ...passed 00:05:37.592 Test: allow_ipv4_denied ...passed 00:05:37.592 Test: allow_ipv4_invalid ...passed 00:05:37.592 Test: node_access_allowed ...passed 00:05:37.592 Test: node_access_denied_by_empty_netmask ...passed 00:05:37.592 Test: node_access_multi_initiator_groups_cases ...passed 00:05:37.592 Test: allow_iscsi_name_multi_maps_case ...passed 00:05:37.592 Test: chap_param_test_cases ...passed 00:05:37.592 00:05:37.592 [2024-07-23 06:18:50.046119] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:05:37.592 [2024-07-23 06:18:50.046161] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:05:37.592 [2024-07-23 06:18:50.046189] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:05:37.592 [2024-07-23 06:18:50.046213] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:05:37.592 [2024-07-23 06:18:50.046232] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:05:37.592 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.592 suites 1 1 n/a 0 0 00:05:37.592 tests 13 13 13 0 0 00:05:37.592 asserts 50 50 50 0 n/a 00:05:37.592 00:05:37.592 Elapsed time = 0.000 seconds 00:05:37.592 06:18:50 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:05:37.592 00:05:37.592 00:05:37.592 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.592 http://cunit.sourceforge.net/ 00:05:37.592 00:05:37.592 00:05:37.592 Suite: iscsi_suite 00:05:37.592 Test: op_login_check_target_test ...passed 00:05:37.592 Test: op_login_session_normal_test ...passed 00:05:37.592 Test: maxburstlength_test ...[2024-07-23 06:18:50.053180] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:05:37.592 [2024-07-23 06:18:50.053340] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:05:37.592 [2024-07-23 06:18:50.053353] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:05:37.592 [2024-07-23 06:18:50.053362] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:05:37.592 [2024-07-23 06:18:50.053386] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:05:37.592 [2024-07-23 06:18:50.053405] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:05:37.592 [2024-07-23 06:18:50.053436] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:05:37.592 [2024-07-23 06:18:50.053448] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:05:37.592 passed 00:05:37.592 Test: underflow_for_read_transfer_test ...passed 00:05:37.592 Test: underflow_for_zero_read_transfer_test ...passed 00:05:37.592 Test: underflow_for_request_sense_test ...passed 00:05:37.592 Test: underflow_for_check_condition_test ...passed 00:05:37.592 Test: add_transfer_task_test ...passed 00:05:37.592 Test: get_transfer_task_test ...passed 00:05:37.592 Test: del_transfer_task_test ...passed 00:05:37.592 Test: clear_all_transfer_tasks_test ...passed 00:05:37.592 Test: build_iovs_test ...passed 00:05:37.592 Test: build_iovs_with_md_test ...passed 00:05:37.592 Test: pdu_hdr_op_login_test ...passed 00:05:37.592 Test: pdu_hdr_op_text_test ...passed 00:05:37.592 Test: pdu_hdr_op_logout_test ...passed 00:05:37.592 Test: pdu_hdr_op_scsi_test ...[2024-07-23 06:18:50.053509] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:05:37.592 [2024-07-23 06:18:50.053523] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4569:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:05:37.592 [2024-07-23 06:18:50.053649] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:05:37.592 [2024-07-23 06:18:50.053661] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1264:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:05:37.592 [2024-07-23 06:18:50.053671] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:05:37.592 [2024-07-23 06:18:50.053686] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2259:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:05:37.592 [2024-07-23 06:18:50.053695] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:05:37.592 [2024-07-23 06:18:50.053704] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2304:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:05:37.592 [2024-07-23 06:18:50.053715] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2535:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:05:37.592 [2024-07-23 06:18:50.053728] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:05:37.592 [2024-07-23 06:18:50.053736] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:05:37.592 [2024-07-23 06:18:50.053744] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:05:37.592 [2024-07-23 06:18:50.053753] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3416:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:05:37.592 [2024-07-23 06:18:50.053762] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3423:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:05:37.592 [2024-07-23 06:18:50.053771] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:05:37.592 passed 00:05:37.592 Test: pdu_hdr_op_task_mgmt_test ...passed 00:05:37.592 Test: pdu_hdr_op_nopout_test ...passed 00:05:37.592 Test: pdu_hdr_op_data_test ...passed 00:05:37.592 Test: empty_text_with_cbit_test ...passed 00:05:37.592 Test: pdu_payload_read_test ...[2024-07-23 06:18:50.053782] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:05:37.592 [2024-07-23 06:18:50.053797] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:05:37.592 [2024-07-23 06:18:50.053810] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:05:37.592 [2024-07-23 06:18:50.053819] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:05:37.592 [2024-07-23 06:18:50.053827] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:05:37.592 [2024-07-23 06:18:50.053835] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:05:37.592 [2024-07-23 06:18:50.053845] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:05:37.593 [2024-07-23 06:18:50.053854] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:05:37.593 [2024-07-23 06:18:50.053862] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:05:37.593 [2024-07-23 06:18:50.053871] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4235:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:05:37.593 [2024-07-23 06:18:50.053880] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:05:37.593 [2024-07-23 06:18:50.053888] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:05:37.593 [2024-07-23 06:18:50.053896] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4263:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:05:37.593 [2024-07-23 06:18:50.054202] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4650:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:05:37.593 passed 00:05:37.593 Test: data_out_pdu_sequence_test ...passed 00:05:37.593 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:05:37.593 00:05:37.593 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.593 suites 1 1 n/a 0 0 00:05:37.593 tests 24 24 24 0 0 00:05:37.593 asserts 150253 150253 150253 0 n/a 00:05:37.593 00:05:37.593 Elapsed time = 0.000 seconds 00:05:37.593 06:18:50 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:05:37.593 00:05:37.593 00:05:37.593 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.593 http://cunit.sourceforge.net/ 00:05:37.593 00:05:37.593 00:05:37.593 Suite: init_grp_suite 00:05:37.593 Test: create_initiator_group_success_case ...passed 00:05:37.593 Test: find_initiator_group_success_case ...passed 00:05:37.593 Test: register_initiator_group_twice_case ...passed 00:05:37.593 Test: add_initiator_name_success_case ...passed 00:05:37.593 Test: add_initiator_name_fail_case ...[2024-07-23 06:18:50.061991] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:05:37.593 passed 00:05:37.593 Test: delete_all_initiator_names_success_case ...passed 00:05:37.593 Test: add_netmask_success_case ...passed 00:05:37.593 Test: add_netmask_fail_case ...passed 00:05:37.593 Test: delete_all_netmasks_success_case ...passed 00:05:37.593 Test: initiator_name_overwrite_all_to_any_case ...passed 00:05:37.593 Test: netmask_overwrite_all_to_any_case ...passed 00:05:37.593 Test: add_delete_initiator_names_case ...[2024-07-23 06:18:50.062252] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:05:37.593 passed 00:05:37.593 Test: add_duplicated_initiator_names_case ...passed 00:05:37.593 Test: delete_nonexisting_initiator_names_case ...passed 00:05:37.593 Test: add_delete_netmasks_case ...passed 00:05:37.593 Test: add_duplicated_netmasks_case ...passed 00:05:37.593 Test: delete_nonexisting_netmasks_case ...passed 00:05:37.593 00:05:37.593 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.593 suites 1 1 n/a 0 0 00:05:37.593 tests 17 
17 17 0 0 00:05:37.593 asserts 108 108 108 0 n/a 00:05:37.593 00:05:37.593 Elapsed time = 0.000 seconds 00:05:37.593 06:18:50 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:05:37.593 00:05:37.593 00:05:37.593 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.593 http://cunit.sourceforge.net/ 00:05:37.593 00:05:37.593 00:05:37.593 Suite: portal_grp_suite 00:05:37.593 Test: portal_create_ipv4_normal_case ...passed 00:05:37.593 Test: portal_create_ipv6_normal_case ...passed 00:05:37.593 Test: portal_create_ipv4_wildcard_case ...passed 00:05:37.593 Test: portal_create_ipv6_wildcard_case ...passed 00:05:37.593 Test: portal_create_twice_case ...passed 00:05:37.593 Test: portal_grp_register_unregister_case ...passed 00:05:37.593 Test: portal_grp_register_twice_case ...passed 00:05:37.593 Test: portal_grp_add_delete_case ...[2024-07-23 06:18:50.067736] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:05:37.593 passed 00:05:37.593 Test: portal_grp_add_delete_twice_case ...passed 00:05:37.593 00:05:37.593 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.593 suites 1 1 n/a 0 0 00:05:37.593 tests 9 9 9 0 0 00:05:37.593 asserts 44 44 44 0 n/a 00:05:37.593 00:05:37.593 Elapsed time = 0.000 seconds 00:05:37.593 00:05:37.593 real 0m0.041s 00:05:37.593 user 0m0.006s 00:05:37.593 sys 0m0.031s 00:05:37.593 06:18:50 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.593 06:18:50 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:05:37.593 ************************************ 00:05:37.593 END TEST unittest_iscsi 00:05:37.593 ************************************ 00:05:37.593 06:18:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:37.593 06:18:50 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:05:37.593 06:18:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.593 06:18:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.593 06:18:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.852 ************************************ 00:05:37.852 START TEST unittest_json 00:05:37.852 ************************************ 00:05:37.852 06:18:50 unittest.unittest_json -- common/autotest_common.sh@1123 -- # unittest_json 00:05:37.852 06:18:50 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:05:37.852 00:05:37.852 00:05:37.852 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.852 http://cunit.sourceforge.net/ 00:05:37.852 00:05:37.852 00:05:37.852 Suite: json 00:05:37.852 Test: test_parse_literal ...passed 00:05:37.852 Test: test_parse_string_simple ...passed 00:05:37.852 Test: test_parse_string_control_chars ...passed 00:05:37.852 Test: test_parse_string_utf8 ...passed 00:05:37.852 Test: test_parse_string_escapes_twochar ...passed 00:05:37.852 Test: test_parse_string_escapes_unicode ...passed 00:05:37.852 Test: test_parse_number ...passed 00:05:37.852 Test: test_parse_array ...passed 00:05:37.852 Test: test_parse_object ...passed 00:05:37.852 Test: test_parse_nesting ...passed 00:05:37.852 Test: test_parse_comment ...passed 00:05:37.852 00:05:37.852 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.852 suites 1 1 n/a 0 0 00:05:37.852 tests 11 11 11 0 0 00:05:37.852 asserts 1516 
1516 1516 0 n/a 00:05:37.852 00:05:37.852 Elapsed time = 0.000 seconds 00:05:37.852 06:18:50 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:05:37.852 00:05:37.852 00:05:37.852 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.852 http://cunit.sourceforge.net/ 00:05:37.852 00:05:37.852 00:05:37.852 Suite: json 00:05:37.852 Test: test_strequal ...passed 00:05:37.852 Test: test_num_to_uint16 ...passed 00:05:37.852 Test: test_num_to_int32 ...passed 00:05:37.852 Test: test_num_to_uint64 ...passed 00:05:37.852 Test: test_decode_object ...passed 00:05:37.852 Test: test_decode_array ...passed 00:05:37.852 Test: test_decode_bool ...passed 00:05:37.852 Test: test_decode_uint16 ...passed 00:05:37.852 Test: test_decode_int32 ...passed 00:05:37.852 Test: test_decode_uint32 ...passed 00:05:37.852 Test: test_decode_uint64 ...passed 00:05:37.852 Test: test_decode_string ...passed 00:05:37.852 Test: test_decode_uuid ...passed 00:05:37.852 Test: test_find ...passed 00:05:37.852 Test: test_find_array ...passed 00:05:37.852 Test: test_iterating ...passed 00:05:37.852 Test: test_free_object ...passed 00:05:37.852 00:05:37.852 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.852 suites 1 1 n/a 0 0 00:05:37.852 tests 17 17 17 0 0 00:05:37.852 asserts 236 236 236 0 n/a 00:05:37.852 00:05:37.852 Elapsed time = 0.000 seconds 00:05:37.852 06:18:50 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:05:37.852 00:05:37.852 00:05:37.852 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.852 http://cunit.sourceforge.net/ 00:05:37.852 00:05:37.852 00:05:37.852 Suite: json 00:05:37.852 Test: test_write_literal ...passed 00:05:37.852 Test: test_write_string_simple ...passed 00:05:37.852 Test: test_write_string_escapes ...passed 00:05:37.852 Test: test_write_string_utf16le ...passed 00:05:37.852 Test: test_write_number_int32 ...passed 00:05:37.852 Test: test_write_number_uint32 ...passed 00:05:37.852 Test: test_write_number_uint128 ...passed 00:05:37.852 Test: test_write_string_number_uint128 ...passed 00:05:37.852 Test: test_write_number_int64 ...passed 00:05:37.852 Test: test_write_number_uint64 ...passed 00:05:37.852 Test: test_write_number_double ...passed 00:05:37.852 Test: test_write_uuid ...passed 00:05:37.852 Test: test_write_array ...passed 00:05:37.852 Test: test_write_object ...passed 00:05:37.852 Test: test_write_nesting ...passed 00:05:37.852 Test: test_write_val ...passed 00:05:37.852 00:05:37.852 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.852 suites 1 1 n/a 0 0 00:05:37.852 tests 16 16 16 0 0 00:05:37.852 asserts 918 918 918 0 n/a 00:05:37.852 00:05:37.852 Elapsed time = 0.000 seconds 00:05:37.852 06:18:50 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:05:37.852 00:05:37.852 00:05:37.852 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.852 http://cunit.sourceforge.net/ 00:05:37.852 00:05:37.852 00:05:37.852 Suite: jsonrpc 00:05:37.852 Test: test_parse_request ...passed 00:05:37.852 Test: test_parse_request_streaming ...passed 00:05:37.852 00:05:37.852 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.852 suites 1 1 n/a 0 0 00:05:37.852 tests 2 2 2 0 0 00:05:37.852 asserts 289 289 289 0 n/a 00:05:37.852 00:05:37.852 Elapsed time = 0.000 seconds 00:05:37.852 00:05:37.852 real 0m0.028s 
00:05:37.852 user 0m0.010s 00:05:37.852 sys 0m0.023s 00:05:37.852 06:18:50 unittest.unittest_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.852 ************************************ 00:05:37.852 END TEST unittest_json 00:05:37.852 ************************************ 00:05:37.852 06:18:50 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.852 06:18:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:37.852 06:18:50 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:05:37.852 06:18:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.852 06:18:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.852 06:18:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.852 ************************************ 00:05:37.852 START TEST unittest_rpc 00:05:37.852 ************************************ 00:05:37.852 06:18:50 unittest.unittest_rpc -- common/autotest_common.sh@1123 -- # unittest_rpc 00:05:37.852 06:18:50 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:05:37.852 00:05:37.852 00:05:37.853 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.853 http://cunit.sourceforge.net/ 00:05:37.853 00:05:37.853 00:05:37.853 Suite: rpc 00:05:37.853 Test: test_jsonrpc_handler ...passed 00:05:37.853 Test: test_spdk_rpc_is_method_allowed ...passed 00:05:37.853 Test: test_rpc_get_methods ...[2024-07-23 06:18:50.190083] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:05:37.853 passed 00:05:37.853 Test: test_rpc_spdk_get_version ...passed 00:05:37.853 Test: test_spdk_rpc_listen_close ...passed 00:05:37.853 Test: test_rpc_run_multiple_servers ...passed 00:05:37.853 00:05:37.853 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.853 suites 1 1 n/a 0 0 00:05:37.853 tests 6 6 6 0 0 00:05:37.853 asserts 23 23 23 0 n/a 00:05:37.853 00:05:37.853 Elapsed time = 0.000 seconds 00:05:37.853 00:05:37.853 real 0m0.007s 00:05:37.853 user 0m0.000s 00:05:37.853 sys 0m0.008s 00:05:37.853 06:18:50 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.853 06:18:50 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.853 ************************************ 00:05:37.853 END TEST unittest_rpc 00:05:37.853 ************************************ 00:05:37.853 06:18:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:37.853 06:18:50 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:05:37.853 06:18:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.853 06:18:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.853 06:18:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.853 ************************************ 00:05:37.853 START TEST unittest_notify 00:05:37.853 ************************************ 00:05:37.853 06:18:50 unittest.unittest_notify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:05:37.853 00:05:37.853 00:05:37.853 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.853 http://cunit.sourceforge.net/ 00:05:37.853 00:05:37.853 00:05:37.853 Suite: app_suite 00:05:37.853 Test: notify ...passed 00:05:37.853 00:05:37.853 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:37.853 suites 1 1 n/a 0 0 00:05:37.853 tests 1 1 1 0 0 00:05:37.853 asserts 13 13 13 0 n/a 00:05:37.853 00:05:37.853 Elapsed time = 0.000 seconds 00:05:37.853 00:05:37.853 real 0m0.005s 00:05:37.853 user 0m0.005s 00:05:37.853 sys 0m0.000s 00:05:37.853 06:18:50 unittest.unittest_notify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.853 06:18:50 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:05:37.853 ************************************ 00:05:37.853 END TEST unittest_notify 00:05:37.853 ************************************ 00:05:37.853 06:18:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:37.853 06:18:50 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:05:37.853 06:18:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.853 06:18:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.853 06:18:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.853 ************************************ 00:05:37.853 START TEST unittest_nvme 00:05:37.853 ************************************ 00:05:37.853 06:18:50 unittest.unittest_nvme -- common/autotest_common.sh@1123 -- # unittest_nvme 00:05:37.853 06:18:50 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:05:37.853 00:05:37.853 00:05:37.853 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.853 http://cunit.sourceforge.net/ 00:05:37.853 00:05:37.853 00:05:37.853 Suite: nvme 00:05:37.853 Test: test_opc_data_transfer ...passed 00:05:37.853 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:05:37.853 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:05:37.853 Test: test_trid_parse_and_compare ...[2024-07-23 06:18:50.295404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:05:37.853 [2024-07-23 06:18:50.295681] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:05:37.853 [2024-07-23 06:18:50.295710] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1212:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:05:37.853 [2024-07-23 06:18:50.295726] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:05:37.853 [2024-07-23 06:18:50.295749] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:05:37.853 [2024-07-23 06:18:50.295763] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:05:37.853 passed 00:05:37.853 Test: test_trid_trtype_str ...passed 00:05:37.853 Test: test_trid_adrfam_str ...passed 00:05:37.853 Test: test_nvme_ctrlr_probe ...passed 00:05:37.853 Test: test_spdk_nvme_probe ...passed 00:05:37.853 Test: test_spdk_nvme_connect ...[2024-07-23 06:18:50.295917] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:05:37.853 [2024-07-23 06:18:50.295957] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:05:37.853 [2024-07-23 06:18:50.295973] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:05:37.853 [2024-07-23 06:18:50.295991] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 822:nvme_probe_internal: *ERROR*: 
NVMe trtype 256 (PCIE) not available 00:05:37.853 [2024-07-23 06:18:50.296006] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:05:37.853 [2024-07-23 06:18:50.296042] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 00:05:37.853 passed 00:05:37.853 Test: test_nvme_ctrlr_probe_internal ...passed 00:05:37.853 Test: test_nvme_init_controllers ...passed 00:05:37.853 Test: test_nvme_driver_init ...[2024-07-23 06:18:50.296163] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:05:37.853 [2024-07-23 06:18:50.296198] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:05:37.853 [2024-07-23 06:18:50.296213] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:05:37.853 [2024-07-23 06:18:50.296233] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:05:37.853 [2024-07-23 06:18:50.296265] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:05:37.853 [2024-07-23 06:18:50.296281] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:05:38.125 passed 00:05:38.125 Test: test_spdk_nvme_detach ...passed 00:05:38.125 Test: test_nvme_completion_poll_cb ...passed 00:05:38.125 Test: test_nvme_user_copy_cmd_complete ...passed 00:05:38.125 Test: test_nvme_allocate_request_null ...passed 00:05:38.125 Test: test_nvme_allocate_request ...passed 00:05:38.125 Test: test_nvme_free_request ...passed 00:05:38.125 Test: test_nvme_allocate_request_user_copy ...[2024-07-23 06:18:50.409211] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:05:38.125 passed 00:05:38.125 Test: test_nvme_robust_mutex_init_shared ...passed 00:05:38.125 Test: test_nvme_request_check_timeout ...passed 00:05:38.125 Test: test_nvme_wait_for_completion ...passed 00:05:38.125 Test: test_spdk_nvme_parse_func ...passed 00:05:38.125 Test: test_spdk_nvme_detach_async ...passed 00:05:38.125 Test: test_nvme_parse_addr ...passed 00:05:38.125 00:05:38.125 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.125 suites 1 1 n/a 0 0 00:05:38.125 tests 25 25 25 0 0 00:05:38.125 asserts 326 326 326 0 n/a 00:05:38.125 00:05:38.125 Elapsed time = 0.000 seconds 00:05:38.125 [2024-07-23 06:18:50.409404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1609:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:05:38.125 06:18:50 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:05:38.125 00:05:38.125 00:05:38.125 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.125 http://cunit.sourceforge.net/ 00:05:38.125 00:05:38.125 00:05:38.125 Suite: nvme_ctrlr 00:05:38.125 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-23 06:18:50.415073] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.125 passed 00:05:38.125 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-23 06:18:50.416625] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 
0 is less than minimum defined by NVMe spec, use min value 00:05:38.125 passed 00:05:38.125 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-23 06:18:50.417919] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.125 passed 00:05:38.125 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-23 06:18:50.419154] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.125 passed 00:05:38.125 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-23 06:18:50.420399] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.125 [2024-07-23 06:18:50.421595] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-23 06:18:50.422806] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-23 06:18:50.423999] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:05:38.125 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-23 06:18:50.426387] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.125 [2024-07-23 06:18:50.428722] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-23 06:18:50.429924] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:05:38.125 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-23 06:18:50.432335] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.125 [2024-07-23 06:18:50.433526] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-23 06:18:50.435880] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:05:38.125 Test: test_nvme_ctrlr_init_delay ...[2024-07-23 06:18:50.438278] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.125 passed 00:05:38.125 Test: test_alloc_io_qpair_rr_1 ...[2024-07-23 06:18:50.439552] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.125 [2024-07-23 06:18:50.439627] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:05:38.125 passed 00:05:38.125 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:05:38.125 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:05:38.125 Test: test_alloc_io_qpair_wrr_1 ...passed 00:05:38.125 Test: test_alloc_io_qpair_wrr_2 ...passed 00:05:38.125 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-23 
06:18:50.439655] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:05:38.125 [2024-07-23 06:18:50.439670] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:05:38.125 [2024-07-23 06:18:50.439684] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:05:38.125 [2024-07-23 06:18:50.439751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.125 [2024-07-23 06:18:50.439784] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.125 [2024-07-23 06:18:50.439804] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:05:38.125 [2024-07-23 06:18:50.439844] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4993:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:05:38.125 [2024-07-23 06:18:50.439861] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:05:38.125 passed 00:05:38.125 Test: test_nvme_ctrlr_fail ...passed 00:05:38.125 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:05:38.125 Test: test_nvme_ctrlr_set_supported_features ...passed 00:05:38.125 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-23 06:18:50.439876] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5070:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:05:38.125 [2024-07-23 06:18:50.439891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:05:38.125 [2024-07-23 06:18:50.439908] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
00:05:38.125 [2024-07-23 06:18:50.439940] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.125 passed 00:05:38.125 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:05:38.125 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-23 06:18:50.441177] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.125 passed 00:05:38.125 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:05:38.125 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:05:38.125 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:05:38.125 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-23 06:18:50.476740] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.126 passed 00:05:38.126 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-23 06:18:50.483685] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.126 passed 00:05:38.126 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-23 06:18:50.484868] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.126 [2024-07-23 06:18:50.484912] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3003:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:05:38.126 passed 00:05:38.126 Test: test_alloc_io_qpair_fail ...[2024-07-23 06:18:50.486049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.126 passed 00:05:38.126 Test: test_nvme_ctrlr_add_remove_process ...[2024-07-23 06:18:50.486097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:05:38.126 passed 00:05:38.126 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:05:38.126 Test: test_nvme_ctrlr_set_state ...[2024-07-23 06:18:50.486131] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1547:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:05:38.126 passed 00:05:38.126 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-23 06:18:50.486148] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.126 passed 00:05:38.126 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-23 06:18:50.488748] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.126 passed 00:05:38.126 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-23 06:18:50.495832] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.126 passed 00:05:38.126 Test: test_nvme_ctrlr_reset ...[2024-07-23 06:18:50.497043] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.126 passed 00:05:38.126 Test: test_nvme_ctrlr_aer_callback ...[2024-07-23 06:18:50.497156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.126 passed 00:05:38.126 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-23 06:18:50.498386] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.126 passed 00:05:38.126 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:05:38.126 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:05:38.126 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-23 06:18:50.499724] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.126 passed 00:05:38.126 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:05:38.126 Test: test_nvme_ctrlr_ana_resize ...[2024-07-23 06:18:50.500931] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.126 passed 00:05:38.126 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:05:38.126 Test: test_nvme_transport_ctrlr_ready ...[2024-07-23 06:18:50.502180] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:05:38.126 passed 00:05:38.126 Test: test_nvme_ctrlr_disable ...[2024-07-23 06:18:50.502214] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4205:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:05:38.126 [2024-07-23 06:18:50.502231] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:38.126 passed 00:05:38.126 00:05:38.126 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.126 suites 1 1 n/a 0 0 00:05:38.126 tests 44 44 44 0 0 00:05:38.126 asserts 10434 10434 10434 0 n/a 00:05:38.126 00:05:38.126 Elapsed time = 0.047 seconds 00:05:38.126 06:18:50 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 
00:05:38.126 00:05:38.126 00:05:38.126 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.126 http://cunit.sourceforge.net/ 00:05:38.126 00:05:38.126 00:05:38.126 Suite: nvme_ctrlr_cmd 00:05:38.126 Test: test_get_log_pages ...passed 00:05:38.126 Test: test_set_feature_cmd ...passed 00:05:38.126 Test: test_set_feature_ns_cmd ...passed 00:05:38.126 Test: test_get_feature_cmd ...passed 00:05:38.126 Test: test_get_feature_ns_cmd ...passed 00:05:38.126 Test: test_abort_cmd ...passed 00:05:38.126 Test: test_set_host_id_cmds ...passed 00:05:38.126 Test: test_io_cmd_raw_no_payload_build ...passed 00:05:38.126 Test: test_io_raw_cmd ...passed 00:05:38.126 Test: test_io_raw_cmd_with_md ...passed 00:05:38.126 Test: test_namespace_attach ...passed 00:05:38.126 Test: test_namespace_detach ...passed 00:05:38.126 Test: test_namespace_create ...passed 00:05:38.126 Test: test_namespace_delete ...passed 00:05:38.126 Test: test_doorbell_buffer_config ...passed 00:05:38.126 Test: test_format_nvme ...passed 00:05:38.126 Test: test_fw_commit ...passed 00:05:38.126 Test: test_fw_image_download ...passed 00:05:38.126 Test: test_sanitize ...passed 00:05:38.126 Test: test_directive ...passed 00:05:38.126 Test: test_nvme_request_add_abort ...passed 00:05:38.126 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:05:38.126 Test: test_nvme_ctrlr_cmd_identify ...passed 00:05:38.126 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:05:38.126 00:05:38.126 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.126 suites 1 1 n/a 0 0 00:05:38.126 tests 24 24 24 0 0 00:05:38.126 asserts 198 198 198 0 n/a 00:05:38.126 00:05:38.126 Elapsed time = 0.000 seconds 00:05:38.126 [2024-07-23 06:18:50.511953] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:05:38.126 06:18:50 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:05:38.126 00:05:38.126 00:05:38.126 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.126 http://cunit.sourceforge.net/ 00:05:38.126 00:05:38.126 00:05:38.126 Suite: nvme_ctrlr_cmd 00:05:38.126 Test: test_geometry_cmd ...passed 00:05:38.126 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:05:38.126 00:05:38.126 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.126 suites 1 1 n/a 0 0 00:05:38.126 tests 2 2 2 0 0 00:05:38.126 asserts 7 7 7 0 n/a 00:05:38.126 00:05:38.126 Elapsed time = 0.000 seconds 00:05:38.126 06:18:50 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:05:38.126 00:05:38.126 00:05:38.126 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.126 http://cunit.sourceforge.net/ 00:05:38.126 00:05:38.126 00:05:38.126 Suite: nvme 00:05:38.126 Test: test_nvme_ns_construct ...passed 00:05:38.126 Test: test_nvme_ns_uuid ...passed 00:05:38.126 Test: test_nvme_ns_csi ...passed 00:05:38.126 Test: test_nvme_ns_data ...passed 00:05:38.126 Test: test_nvme_ns_set_identify_data ...passed 00:05:38.126 Test: test_spdk_nvme_ns_get_values ...passed 00:05:38.126 Test: test_spdk_nvme_ns_is_active ...passed 00:05:38.126 Test: spdk_nvme_ns_supports ...passed 00:05:38.126 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:05:38.126 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:05:38.126 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:05:38.126 Test: 
test_nvme_ns_find_id_desc ...passed 00:05:38.126 00:05:38.126 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.126 suites 1 1 n/a 0 0 00:05:38.126 tests 12 12 12 0 0 00:05:38.126 asserts 95 95 95 0 n/a 00:05:38.126 00:05:38.126 Elapsed time = 0.000 seconds 00:05:38.126 06:18:50 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:05:38.126 00:05:38.126 00:05:38.126 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.126 http://cunit.sourceforge.net/ 00:05:38.126 00:05:38.126 00:05:38.126 Suite: nvme_ns_cmd 00:05:38.126 Test: split_test ...passed 00:05:38.126 Test: split_test2 ...passed 00:05:38.126 Test: split_test3 ...passed 00:05:38.126 Test: split_test4 ...passed 00:05:38.126 Test: test_nvme_ns_cmd_flush ...passed 00:05:38.126 Test: test_nvme_ns_cmd_dataset_management ...passed 00:05:38.126 Test: test_nvme_ns_cmd_copy ...passed 00:05:38.126 Test: test_io_flags ...passed 00:05:38.127 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:05:38.127 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:05:38.127 Test: test_nvme_ns_cmd_reservation_register ...passed 00:05:38.127 Test: test_nvme_ns_cmd_reservation_release ...passed 00:05:38.127 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:05:38.127 Test: test_nvme_ns_cmd_reservation_report ...passed 00:05:38.127 Test: test_cmd_child_request ...passed 00:05:38.127 Test: test_nvme_ns_cmd_readv ...passed 00:05:38.127 Test: test_nvme_ns_cmd_read_with_md ...passed 00:05:38.127 Test: test_nvme_ns_cmd_writev ...passed 00:05:38.127 Test: test_nvme_ns_cmd_write_with_md ...[2024-07-23 06:18:50.526553] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:05:38.127 [2024-07-23 06:18:50.526778] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 292:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:05:38.127 passed 00:05:38.127 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:05:38.127 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:05:38.127 Test: test_nvme_ns_cmd_comparev ...passed 00:05:38.127 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:05:38.127 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:05:38.127 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:05:38.127 Test: test_nvme_ns_cmd_setup_request ...passed 00:05:38.127 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:05:38.127 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:05:38.127 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:05:38.127 Test: test_nvme_ns_cmd_verify ...passed 00:05:38.127 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:05:38.127 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:05:38.127 00:05:38.127 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.127 suites 1 1 n/a 0 0 00:05:38.127 tests 32 32 32 0 0 00:05:38.127 asserts 550 550 550 0 n/a 00:05:38.127 00:05:38.127 Elapsed time = 0.000 seconds 00:05:38.127 [2024-07-23 06:18:50.526881] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:05:38.127 [2024-07-23 06:18:50.526897] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:05:38.127 06:18:50 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:05:38.127 00:05:38.127 00:05:38.127 CUnit - A unit 
testing framework for C - Version 2.1-3 00:05:38.127 http://cunit.sourceforge.net/ 00:05:38.127 00:05:38.127 00:05:38.127 Suite: nvme_ns_cmd 00:05:38.127 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:05:38.127 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:05:38.127 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:05:38.127 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:05:38.127 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:05:38.127 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:05:38.127 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:05:38.127 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:05:38.127 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:05:38.127 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:05:38.127 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:05:38.127 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:05:38.127 00:05:38.127 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.127 suites 1 1 n/a 0 0 00:05:38.127 tests 12 12 12 0 0 00:05:38.127 asserts 123 123 123 0 n/a 00:05:38.127 00:05:38.127 Elapsed time = 0.000 seconds 00:05:38.127 06:18:50 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:05:38.127 00:05:38.127 00:05:38.127 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.127 http://cunit.sourceforge.net/ 00:05:38.127 00:05:38.127 00:05:38.127 Suite: nvme_qpair 00:05:38.127 Test: test3 ...passed 00:05:38.127 Test: test_ctrlr_failed ...passed 00:05:38.127 Test: struct_packing ...passed 00:05:38.127 Test: test_nvme_qpair_process_completions ...[2024-07-23 06:18:50.535570] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:05:38.127 passed 00:05:38.127 Test: test_nvme_completion_is_retry ...passed 00:05:38.127 Test: test_get_status_string ...passed 00:05:38.127 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:05:38.127 Test: test_nvme_qpair_submit_request ...passed 00:05:38.127 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:05:38.127 Test: test_nvme_qpair_manual_complete_request ...passed 00:05:38.127 Test: test_nvme_qpair_init_deinit ...passed 00:05:38.127 Test: test_nvme_get_sgl_print_info ...passed 00:05:38.127 00:05:38.127 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.127 suites 1 1 n/a 0 0 00:05:38.127 tests 12 12 12 0 0 00:05:38.127 asserts 154 154 154 0 n/a 00:05:38.127 00:05:38.127 Elapsed time = 0.000 seconds 00:05:38.127 [2024-07-23 06:18:50.535728] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:05:38.127 [2024-07-23 06:18:50.535780] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:05:38.127 [2024-07-23 06:18:50.535794] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:05:38.127 [2024-07-23 06:18:50.535861] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:05:38.127 06:18:50 unittest.unittest_nvme -- unit/unittest.sh@96 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:05:38.127 00:05:38.127 00:05:38.127 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.127 http://cunit.sourceforge.net/ 00:05:38.127 00:05:38.127 00:05:38.127 Suite: nvme_pcie 00:05:38.127 Test: test_prp_list_append ...[2024-07-23 06:18:50.539807] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1206:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:05:38.127 passed 00:05:38.127 Test: test_nvme_pcie_hotplug_monitor ...passed 00:05:38.127 Test: test_shadow_doorbell_update ...passed 00:05:38.127 Test: test_build_contig_hw_sgl_request ...passed 00:05:38.127 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:05:38.127 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:05:38.127 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:05:38.127 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:05:38.127 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:05:38.127 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:05:38.127 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:05:38.127 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:05:38.127 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-23 06:18:50.539943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:05:38.127 [2024-07-23 06:18:50.539956] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1225:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:05:38.127 [2024-07-23 06:18:50.539993] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1219:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:05:38.127 [2024-07-23 06:18:50.540010] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1219:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:05:38.127 [2024-07-23 06:18:50.540086] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1206:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:05:38.127 [2024-07-23 06:18:50.540107] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:05:38.127 [2024-07-23 06:18:50.540120] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:05:38.127 [2024-07-23 06:18:50.540132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:05:38.127 passed 00:05:38.127 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:05:38.127 00:05:38.127 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.127 suites 1 1 n/a 0 0 00:05:38.127 tests 14 14 14 0 0 00:05:38.127 asserts 235 235 235 0 n/a 00:05:38.127 00:05:38.127 Elapsed time = 0.000 seconds 00:05:38.127 [2024-07-23 06:18:50.540143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:05:38.127 06:18:50 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:05:38.127 00:05:38.127 00:05:38.127 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.127 http://cunit.sourceforge.net/ 00:05:38.127 00:05:38.127 00:05:38.127 Suite: nvme_ns_cmd 00:05:38.127 Test: nvme_poll_group_create_test ...passed 00:05:38.127 Test: nvme_poll_group_add_remove_test ...passed 00:05:38.127 Test: nvme_poll_group_process_completions ...passed 00:05:38.127 Test: nvme_poll_group_destroy_test ...passed 00:05:38.127 Test: nvme_poll_group_get_free_stats ...passed 00:05:38.127 00:05:38.128 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.128 suites 1 1 n/a 0 0 00:05:38.128 tests 5 5 5 0 0 00:05:38.128 asserts 75 75 75 0 n/a 00:05:38.128 00:05:38.128 Elapsed time = 0.000 seconds 00:05:38.128 06:18:50 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:05:38.128 00:05:38.128 00:05:38.128 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.128 http://cunit.sourceforge.net/ 00:05:38.128 00:05:38.128 00:05:38.128 Suite: nvme_quirks 00:05:38.128 Test: test_nvme_quirks_striping ...passed 00:05:38.128 00:05:38.128 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.128 suites 1 1 n/a 0 0 00:05:38.128 tests 1 1 1 0 0 00:05:38.128 asserts 5 5 5 0 n/a 00:05:38.128 00:05:38.128 Elapsed time = 0.000 seconds 00:05:38.128 06:18:50 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:05:38.128 00:05:38.128 00:05:38.128 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.128 http://cunit.sourceforge.net/ 00:05:38.128 00:05:38.128 00:05:38.128 Suite: nvme_tcp 00:05:38.128 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:05:38.128 Test: test_nvme_tcp_build_iovs ...passed 00:05:38.128 Test: test_nvme_tcp_build_sgl_request ...[2024-07-23 06:18:50.555382] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x820667488, and the iovcnt=16, remaining_size=28672 00:05:38.128 passed 00:05:38.128 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:05:38.128 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:05:38.128 Test: test_nvme_tcp_req_complete_safe ...passed 00:05:38.128 Test: test_nvme_tcp_req_get ...passed 00:05:38.128 Test: test_nvme_tcp_req_init ...passed 00:05:38.128 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:05:38.128 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:05:38.128 Test: test_nvme_tcp_qpair_set_recv_state ...[2024-07-23 06:18:50.555808] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820669038 is same with the state(6) to be set 00:05:38.128 passed 00:05:38.128 Test: test_nvme_tcp_alloc_reqs ...passed 00:05:38.128 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:05:38.128 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-23 06:18:50.555920] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820669038 is same with the state(5) to be set 00:05:38.128 [2024-07-23 06:18:50.555965] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x8206687c8 00:05:38.128 [2024-07-23 06:18:50.555998] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1250:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:05:38.128 [2024-07-23 06:18:50.556019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820669038 is same with the state(5) to be set 00:05:38.128 [2024-07-23 06:18:50.556035] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:05:38.128 [2024-07-23 06:18:50.556051] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820669038 is same with the state(5) to be set 00:05:38.128 [2024-07-23 06:18:50.556067] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:05:38.128 [2024-07-23 06:18:50.556083] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820669038 is same with the state(5) to be set 00:05:38.128 [2024-07-23 06:18:50.556114] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820669038 is same with the state(5) to be set 00:05:38.128 [2024-07-23 06:18:50.556145] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820669038 is same with the state(5) to be set 00:05:38.128 passed 00:05:38.128 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-23 06:18:50.556169] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820669038 is same with the state(5) to be set 00:05:38.128 [2024-07-23 06:18:50.556186] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820669038 is same with the state(5) to be set 00:05:38.128 [2024-07-23 06:18:50.556202] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820669038 is same with the state(5) to be set 00:05:38.128 [2024-07-23 06:18:50.556251] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:05:38.128 [2024-07-23 06:18:50.556269] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:05:56.286 [2024-07-23 06:19:05.704722] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:05:56.286 passed 00:05:56.286 Test: test_nvme_tcp_qpair_icreq_send ...passed 
00:05:56.286 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:05:56.286 Test: test_nvme_tcp_icresp_handle ...[2024-07-23 06:19:05.704847] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820668c00): PDU Sequence Error 00:05:56.286 [2024-07-23 06:19:05.704874] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:05:56.286 passed 00:05:56.286 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:05:56.286 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:05:56.286 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:05:56.286 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-23 06:19:05.704896] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1584:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:05:56.286 [2024-07-23 06:19:05.704912] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820669038 is same with the state(5) to be set 00:05:56.286 [2024-07-23 06:19:05.704928] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:05:56.286 [2024-07-23 06:19:05.704943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820669038 is same with the state(5) to be set 00:05:56.286 [2024-07-23 06:19:05.704959] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820669038 is same with the state(0) to be set 00:05:56.286 [2024-07-23 06:19:05.704979] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820668c00): PDU Sequence Error 00:05:56.286 [2024-07-23 06:19:05.705012] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x820669038 00:05:56.286 [2024-07-23 06:19:05.705084] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x820666d98, errno=0, rc=0 00:05:56.286 [2024-07-23 06:19:05.705104] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820666d98 is same with the state(5) to be set 00:05:56.286 [2024-07-23 06:19:05.705119] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820666d98 is same with the state(5) to be set 00:05:56.286 [2024-07-23 06:19:05.705391] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820666d98 (0): No error: 0 00:05:56.286 passed 00:05:56.286 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-23 06:19:05.705457] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820666d98 (0): No error: 0 00:05:56.286 passed 00:05:56.286 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:05:56.286 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:05:56.286 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-23 06:19:05.774416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:05:56.286 [2024-07-23 06:19:05.774471] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:05:56.286 [2024-07-23 06:19:05.774504] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:56.286 [2024-07-23 06:19:05.774513] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:56.286 passed 00:05:56.286 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-23 06:19:05.774550] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:05:56.286 [2024-07-23 06:19:05.774559] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:05:56.286 [2024-07-23 06:19:05.774571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:05:56.286 [2024-07-23 06:19:05.774578] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:05:56.286 [2024-07-23 06:19:05.774591] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2384:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2dd98da6b000 with addr=192.168.1.78, port=23 00:05:56.286 [2024-07-23 06:19:05.774598] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:05:56.286 passed 00:05:56.286 00:05:56.286 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.286 suites 1 1 n/a 0 0 00:05:56.286 tests 27 27 27 0 0 00:05:56.286 asserts 624 624 624 0 n/a 00:05:56.286 00:05:56.286 Elapsed time = 0.070 seconds 00:05:56.286 [2024-07-23 06:19:05.774615] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x2dd98da39180, and the iovcnt=1, remaining_size=1024 00:05:56.286 [2024-07-23 06:19:05.774623] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:05:56.286 06:19:05 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:05:56.286 00:05:56.286 00:05:56.286 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.286 http://cunit.sourceforge.net/ 00:05:56.286 00:05:56.286 00:05:56.286 Suite: nvme_transport 00:05:56.286 Test: test_nvme_get_transport ...passed 00:05:56.286 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:05:56.286 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:05:56.286 Test: test_nvme_transport_poll_group_add_remove ...passed 00:05:56.286 Test: test_ctrlr_get_memory_domains ...passed 00:05:56.286 00:05:56.286 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.286 suites 1 1 n/a 0 0 00:05:56.286 tests 5 5 5 0 0 00:05:56.286 asserts 28 28 28 0 n/a 00:05:56.286 00:05:56.286 Elapsed time = 0.000 seconds 00:05:56.286 06:19:05 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:05:56.286 00:05:56.286 00:05:56.286 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.286 http://cunit.sourceforge.net/ 00:05:56.286 00:05:56.286 00:05:56.286 Suite: nvme_io_msg 00:05:56.286 Test: 
test_nvme_io_msg_send ...passed 00:05:56.286 Test: test_nvme_io_msg_process ...passed 00:05:56.286 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:05:56.286 00:05:56.286 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.286 suites 1 1 n/a 0 0 00:05:56.286 tests 3 3 3 0 0 00:05:56.286 asserts 56 56 56 0 n/a 00:05:56.286 00:05:56.286 Elapsed time = 0.000 seconds 00:05:56.286 06:19:05 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:05:56.286 00:05:56.286 00:05:56.286 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.286 http://cunit.sourceforge.net/ 00:05:56.287 00:05:56.287 00:05:56.287 Suite: nvme_pcie_common 00:05:56.287 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:05:56.287 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:05:56.287 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:05:56.287 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:05:56.287 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:05:56.287 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:05:56.287 00:05:56.287 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.287 suites 1 1 n/a 0 0 00:05:56.287 tests 6 6 6 0 0 00:05:56.287 asserts 148 148 148 0 n/a 00:05:56.287 00:05:56.287 Elapsed time = 0.000 seconds 00:05:56.287 [2024-07-23 06:19:05.791189] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:05:56.287 [2024-07-23 06:19:05.791399] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 505:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:05:56.287 [2024-07-23 06:19:05.791414] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 458:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 
00:05:56.287 [2024-07-23 06:19:05.791423] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 552:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:05:56.287 [2024-07-23 06:19:05.791508] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1804:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:56.287 [2024-07-23 06:19:05.791516] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1804:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:56.287 06:19:05 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:05:56.287 00:05:56.287 00:05:56.287 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.287 http://cunit.sourceforge.net/ 00:05:56.287 00:05:56.287 00:05:56.287 Suite: nvme_fabric 00:05:56.287 Test: test_nvme_fabric_prop_set_cmd ...passed 00:05:56.287 Test: test_nvme_fabric_prop_get_cmd ...passed 00:05:56.287 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:05:56.287 Test: test_nvme_fabric_discover_probe ...passed 00:05:56.287 Test: test_nvme_fabric_qpair_connect ...passed 00:05:56.287 00:05:56.287 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.287 suites 1 1 n/a 0 0 00:05:56.287 tests 5 5 5 0 0 00:05:56.287 asserts 60 60 60 0 n/a 00:05:56.287 00:05:56.287 Elapsed time = 0.000 seconds 00:05:56.287 [2024-07-23 06:19:05.795455] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 607:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:05:56.287 06:19:05 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:05:56.287 00:05:56.287 00:05:56.287 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.287 http://cunit.sourceforge.net/ 00:05:56.287 00:05:56.287 00:05:56.287 Suite: nvme_opal 00:05:56.287 Test: test_opal_nvme_security_recv_send_done ...passed 00:05:56.287 Test: test_opal_add_short_atom_header ...passed 00:05:56.287 00:05:56.287 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.287 suites 1 1 n/a 0 0 00:05:56.287 tests 2 2 2 0 0 00:05:56.287 asserts 22 22 22 0 n/a 00:05:56.287 00:05:56.287 Elapsed time = 0.000 seconds 00:05:56.287 [2024-07-23 06:19:05.798727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 
00:05:56.287 00:05:56.287 real 0m15.509s 00:05:56.287 user 0m0.120s 00:05:56.287 sys 0m0.088s 00:05:56.287 ************************************ 00:05:56.287 END TEST unittest_nvme 00:05:56.287 ************************************ 00:05:56.287 06:19:05 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.287 06:19:05 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:56.287 06:19:05 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.287 06:19:05 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:05:56.287 06:19:05 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.287 06:19:05 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.287 06:19:05 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.287 ************************************ 00:05:56.287 START TEST unittest_log 00:05:56.287 ************************************ 00:05:56.287 06:19:05 unittest.unittest_log -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:05:56.287 00:05:56.287 00:05:56.287 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.287 http://cunit.sourceforge.net/ 00:05:56.287 00:05:56.287 00:05:56.287 Suite: log 00:05:56.287 Test: log_test ...[2024-07-23 06:19:05.845375] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:05:56.287 [2024-07-23 06:19:05.845610] log_ut.c: 57:log_test: *DEBUG*: log test 00:05:56.287 log dump test: 00:05:56.287 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:05:56.287 spdk dump test: 00:05:56.287 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:05:56.287 spdk dump test: 00:05:56.287 passed 00:05:56.287 Test: deprecation ...00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:05:56.287 00000010 65 20 63 68 61 72 73 e chars 00:05:56.287 passed 00:05:56.287 00:05:56.287 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.287 suites 1 1 n/a 0 0 00:05:56.287 tests 2 2 2 0 0 00:05:56.287 asserts 73 73 73 0 n/a 00:05:56.287 00:05:56.287 Elapsed time = 0.000 seconds 00:05:56.287 00:05:56.287 real 0m1.019s 00:05:56.287 user 0m0.010s 00:05:56.287 sys 0m0.004s 00:05:56.287 06:19:06 unittest.unittest_log -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.287 06:19:06 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:05:56.287 ************************************ 00:05:56.287 END TEST unittest_log 00:05:56.287 ************************************ 00:05:56.287 06:19:06 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.287 06:19:06 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:05:56.287 06:19:06 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.287 06:19:06 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.287 06:19:06 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.287 ************************************ 00:05:56.287 START TEST unittest_lvol 00:05:56.287 ************************************ 00:05:56.287 06:19:06 unittest.unittest_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:05:56.287 00:05:56.287 00:05:56.287 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.287 http://cunit.sourceforge.net/ 00:05:56.287 00:05:56.287 00:05:56.287 Suite: lvol 00:05:56.287 
Test: lvs_init_unload_success ...[2024-07-23 06:19:06.902966] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:05:56.287 passed 00:05:56.287 Test: lvs_init_destroy_success ...passed 00:05:56.287 Test: lvs_init_opts_success ...passed 00:05:56.287 Test: lvs_unload_lvs_is_null_fail ...[2024-07-23 06:19:06.903313] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:05:56.287 [2024-07-23 06:19:06.903389] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:05:56.287 passed 00:05:56.287 Test: lvs_names ...[2024-07-23 06:19:06.903425] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:05:56.287 [2024-07-23 06:19:06.903451] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:05:56.287 [2024-07-23 06:19:06.903486] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:05:56.287 passed 00:05:56.287 Test: lvol_create_destroy_success ...passed 00:05:56.287 Test: lvol_create_fail ...[2024-07-23 06:19:06.903582] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:05:56.287 [2024-07-23 06:19:06.903623] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:05:56.287 passed 00:05:56.287 Test: lvol_destroy_fail ...passed 00:05:56.287 Test: lvol_close ...[2024-07-23 06:19:06.903686] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:05:56.287 [2024-07-23 06:19:06.903715] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:05:56.287 passed 00:05:56.287 Test: lvol_resize ...passed 00:05:56.287 Test: lvol_set_read_only ...passed 00:05:56.287 Test: test_lvs_load ...[2024-07-23 06:19:06.903739] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:05:56.287 passed 00:05:56.288 Test: lvols_load ...passed 00:05:56.288 Test: lvol_open ...[2024-07-23 06:19:06.903837] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:05:56.288 [2024-07-23 06:19:06.903881] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:05:56.288 [2024-07-23 06:19:06.903932] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:05:56.288 [2024-07-23 06:19:06.903983] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:05:56.288 passed 00:05:56.288 Test: lvol_snapshot ...passed 00:05:56.288 Test: lvol_snapshot_fail ...passed 00:05:56.288 Test: lvol_clone ...passed 00:05:56.288 Test: lvol_clone_fail ...[2024-07-23 06:19:06.904150] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:05:56.288 passed 00:05:56.288 Test: lvol_iter_clones ...passed 00:05:56.288 Test: lvol_refcnt ...[2024-07-23 06:19:06.904238] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:05:56.288 [2024-07-23 06:19:06.904322] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 
7747a5c8-48bb-11ef-a06c-59ddad71024c because it is still open 00:05:56.288 passed 00:05:56.288 Test: lvol_names ...[2024-07-23 06:19:06.904369] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:05:56.288 [2024-07-23 06:19:06.904406] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:05:56.288 [2024-07-23 06:19:06.904439] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:05:56.288 passed 00:05:56.288 Test: lvol_create_thin_provisioned ...passed 00:05:56.288 Test: lvol_rename ...[2024-07-23 06:19:06.904517] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:05:56.288 [2024-07-23 06:19:06.904539] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:05:56.288 passed 00:05:56.288 Test: lvs_rename ...[2024-07-23 06:19:06.904575] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:05:56.288 passed 00:05:56.288 Test: lvol_inflate ...[2024-07-23 06:19:06.904622] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:05:56.288 passed 00:05:56.288 Test: lvol_decouple_parent ...[2024-07-23 06:19:06.904688] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:05:56.288 passed 00:05:56.288 Test: lvol_get_xattr ...passed 00:05:56.288 Test: lvol_esnap_reload ...passed 00:05:56.288 Test: lvol_esnap_create_bad_args ...passed 00:05:56.288 Test: lvol_esnap_create_delete ...passed 00:05:56.288 Test: lvol_esnap_load_esnaps ...passed 00:05:56.288 Test: lvol_esnap_missing ...passed 00:05:56.288 Test: lvol_esnap_hotplug ... 
00:05:56.288 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:05:56.288 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:05:56.288 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:05:56.288 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:05:56.288 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:05:56.288 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:05:56.288 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:05:56.288 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:05:56.288 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:05:56.288 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:05:56.288 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:05:56.288 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:05:56.288 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:05:56.288 passed 00:05:56.288 Test: lvol_get_by ...passed 00:05:56.288 Test: lvol_shallow_copy ...passed 00:05:56.288 Test: lvol_set_parent ...passed 00:05:56.288 Test: lvol_set_external_parent ...passed 00:05:56.288 00:05:56.288 [2024-07-23 06:19:06.904760] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:05:56.288 [2024-07-23 06:19:06.904780] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:05:56.288 [2024-07-23 06:19:06.904793] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:05:56.288 [2024-07-23 06:19:06.904815] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:05:56.288 [2024-07-23 06:19:06.904843] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:05:56.288 [2024-07-23 06:19:06.904906] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:05:56.288 [2024-07-23 06:19:06.904966] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:05:56.288 [2024-07-23 06:19:06.904984] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:05:56.288 [2024-07-23 06:19:06.905137] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 7747c5a4-48bb-11ef-a06c-59ddad71024c: failed to create esnap bs_dev: error -12 00:05:56.288 [2024-07-23 06:19:06.905217] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 7747c8b3-48bb-11ef-a06c-59ddad71024c: failed to create esnap bs_dev: error -12 00:05:56.288 [2024-07-23 06:19:06.905266] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 7747cac3-48bb-11ef-a06c-59ddad71024c: failed to create esnap bs_dev: error -12 00:05:56.288 [2024-07-23 06:19:06.905575] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: 
*ERROR*: lvol must not be NULL 00:05:56.288 [2024-07-23 06:19:06.905596] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 7747d6d7-48bb-11ef-a06c-59ddad71024c shallow copy, ext_dev must not be NULL 00:05:56.288 [2024-07-23 06:19:06.905645] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:05:56.288 [2024-07-23 06:19:06.905663] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:05:56.288 [2024-07-23 06:19:06.905700] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:05:56.288 [2024-07-23 06:19:06.905720] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:05:56.288 [2024-07-23 06:19:06.905744] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:05:56.288 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.288 suites 1 1 n/a 0 0 00:05:56.288 tests 37 37 37 0 0 00:05:56.288 asserts 1505 1505 1505 0 n/a 00:05:56.288 00:05:56.288 Elapsed time = 0.000 seconds 00:05:56.288 00:05:56.288 real 0m0.011s 00:05:56.288 user 0m0.003s 00:05:56.288 sys 0m0.008s 00:05:56.288 06:19:06 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.288 06:19:06 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:56.288 ************************************ 00:05:56.288 END TEST unittest_lvol 00:05:56.288 ************************************ 00:05:56.288 06:19:06 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.288 06:19:06 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:56.288 06:19:06 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:05:56.288 06:19:06 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.288 06:19:06 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.288 06:19:06 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.288 ************************************ 00:05:56.288 START TEST unittest_nvme_rdma 00:05:56.288 ************************************ 00:05:56.288 06:19:06 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:05:56.288 00:05:56.288 00:05:56.288 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.288 http://cunit.sourceforge.net/ 00:05:56.288 00:05:56.288 00:05:56.288 Suite: nvme_rdma 00:05:56.288 Test: test_nvme_rdma_build_sgl_request ...[2024-07-23 06:19:06.961090] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:05:56.288 [2024-07-23 06:19:06.961441] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1553:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:05:56.288 passed 00:05:56.288 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:05:56.288 Test: test_nvme_rdma_build_contig_request ...passed 00:05:56.288 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:05:56.288 Test: test_nvme_rdma_create_reqs ...passed 00:05:56.288 Test: test_nvme_rdma_create_rsps ...[2024-07-23 
06:19:06.961483] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1609:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:05:56.288 [2024-07-23 06:19:06.961524] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1490:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:05:56.289 [2024-07-23 06:19:06.961563] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:05:56.289 [2024-07-23 06:19:06.961629] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:05:56.289 passed 00:05:56.289 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-23 06:19:06.961670] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1747:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:05:56.289 passed 00:05:56.289 Test: test_nvme_rdma_poller_create ...passed 00:05:56.289 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:05:56.289 Test: test_nvme_rdma_ctrlr_construct ...passed 00:05:56.289 Test: test_nvme_rdma_req_put_and_get ...passed 00:05:56.289 Test: test_nvme_rdma_req_init ...passed 00:05:56.289 Test: test_nvme_rdma_validate_cm_event ...[2024-07-23 06:19:06.961692] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1747:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:05:56.289 [2024-07-23 06:19:06.961734] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:05:56.289 [2024-07-23 06:19:06.961827] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 544:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:05:56.289 [2024-07-23 06:19:06.961855] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 544:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:05:56.289 passed 00:05:56.289 Test: test_nvme_rdma_qpair_init ...passed 00:05:56.289 Test: test_nvme_rdma_qpair_submit_request ...passed 00:05:56.289 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:05:56.289 Test: test_rdma_get_memory_translation ...passed 00:05:56.289 Test: test_get_rdma_qpair_from_wc ...passed 00:05:56.289 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:05:56.289 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:05:56.289 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-23 06:19:06.961909] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:05:56.289 [2024-07-23 06:19:06.961931] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:05:56.289 [2024-07-23 06:19:06.961967] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:56.289 [2024-07-23 06:19:06.961987] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:56.289 [2024-07-23 06:19:06.962036] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
00:05:56.289 [2024-07-23 06:19:06.962075] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:05:56.289 [2024-07-23 06:19:06.962097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820606d88 on poll group 0xe87be872000 00:05:56.289 [2024-07-23 06:19:06.962115] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:05:56.289 [2024-07-23 06:19:06.962126] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:05:56.289 [2024-07-23 06:19:06.962147] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820606d88 on poll group 0xe87be872000 00:05:56.289 passed 00:05:56.289 00:05:56.289 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.289 suites 1 1 n/a 0 0 00:05:56.289 tests 21 21 21 0 0 00:05:56.289 asserts 397 397 397 0 n/a 00:05:56.289 00:05:56.289 Elapsed time = 0.000 seconds 00:05:56.289 [2024-07-23 06:19:06.962262] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:05:56.289 00:05:56.289 real 0m0.010s 00:05:56.289 user 0m0.008s 00:05:56.289 sys 0m0.000s 00:05:56.289 06:19:06 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.289 ************************************ 00:05:56.289 END TEST unittest_nvme_rdma 00:05:56.289 06:19:06 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:56.289 ************************************ 00:05:56.289 06:19:07 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.289 06:19:07 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:05:56.289 06:19:07 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.289 06:19:07 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.289 06:19:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.289 ************************************ 00:05:56.289 START TEST unittest_nvmf_transport 00:05:56.289 ************************************ 00:05:56.289 06:19:07 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:05:56.289 00:05:56.289 00:05:56.289 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.289 http://cunit.sourceforge.net/ 00:05:56.289 00:05:56.289 00:05:56.289 Suite: nvmf 00:05:56.289 Test: test_spdk_nvmf_transport_create ...[2024-07-23 06:19:07.017147] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 
00:05:56.289 [2024-07-23 06:19:07.017507] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:05:56.289 [2024-07-23 06:19:07.017536] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 276:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:05:56.289 passed 00:05:56.289 Test: test_nvmf_transport_poll_group_create ...passed 00:05:56.289 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-23 06:19:07.017589] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 259:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:05:56.289 [2024-07-23 06:19:07.017638] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 799:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 00:05:56.289 [2024-07-23 06:19:07.017659] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 804:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:05:56.289 [2024-07-23 06:19:07.017689] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 809:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:05:56.289 passed 00:05:56.289 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:05:56.289 00:05:56.289 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.289 suites 1 1 n/a 0 0 00:05:56.289 tests 4 4 4 0 0 00:05:56.289 asserts 49 49 49 0 n/a 00:05:56.289 00:05:56.289 Elapsed time = 0.000 seconds 00:05:56.289 00:05:56.289 real 0m0.008s 00:05:56.289 user 0m0.000s 00:05:56.289 sys 0m0.008s 00:05:56.289 06:19:07 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.289 06:19:07 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:05:56.289 ************************************ 00:05:56.289 END TEST unittest_nvmf_transport 00:05:56.289 ************************************ 00:05:56.289 06:19:07 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.289 06:19:07 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:05:56.289 06:19:07 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.289 06:19:07 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.289 06:19:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.289 ************************************ 00:05:56.289 START TEST unittest_rdma 00:05:56.289 ************************************ 00:05:56.289 06:19:07 unittest.unittest_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:05:56.289 00:05:56.289 00:05:56.289 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.289 http://cunit.sourceforge.net/ 00:05:56.289 00:05:56.289 00:05:56.289 Suite: rdma_common 00:05:56.289 Test: test_spdk_rdma_pd ...[2024-07-23 06:19:07.063391] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:05:56.289 [2024-07-23 06:19:07.063647] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:05:56.289 passed 00:05:56.289 00:05:56.289 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.289 suites 1 1 n/a 0 0 00:05:56.289 tests 1 1 1 0 0 00:05:56.289 asserts 31 31 31 0 n/a 00:05:56.289 00:05:56.289 Elapsed time = 0.000 seconds 00:05:56.289 00:05:56.289 real 0m0.005s 
00:05:56.289 user 0m0.005s 00:05:56.289 sys 0m0.000s 00:05:56.289 ************************************ 00:05:56.289 END TEST unittest_rdma 00:05:56.289 ************************************ 00:05:56.289 06:19:07 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.289 06:19:07 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:56.289 06:19:07 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.290 06:19:07 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:56.290 06:19:07 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:05:56.290 06:19:07 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.290 06:19:07 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.290 06:19:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.290 ************************************ 00:05:56.290 START TEST unittest_nvmf 00:05:56.290 ************************************ 00:05:56.290 06:19:07 unittest.unittest_nvmf -- common/autotest_common.sh@1123 -- # unittest_nvmf 00:05:56.290 06:19:07 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:05:56.290 00:05:56.290 00:05:56.290 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.290 http://cunit.sourceforge.net/ 00:05:56.290 00:05:56.290 00:05:56.290 Suite: nvmf 00:05:56.290 Test: test_get_log_page ...passed 00:05:56.290 Test: test_process_fabrics_cmd ...passed 00:05:56.290 Test: test_connect ...[2024-07-23 06:19:07.114097] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:05:56.290 [2024-07-23 06:19:07.114321] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4742:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:05:56.290 [2024-07-23 06:19:07.114401] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:05:56.290 [2024-07-23 06:19:07.114417] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:05:56.290 [2024-07-23 06:19:07.114435] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:05:56.290 [2024-07-23 06:19:07.114447] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:05:56.290 [2024-07-23 06:19:07.114459] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:05:56.290 [2024-07-23 06:19:07.114471] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 894:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:05:56.290 [2024-07-23 06:19:07.114482] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 900:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:05:56.290 [2024-07-23 06:19:07.114494] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
00:05:56.290 [2024-07-23 06:19:07.114519] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:05:56.290 [2024-07-23 06:19:07.114536] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:05:56.290 passed 00:05:56.290 Test: test_get_ns_id_desc_list ...passed 00:05:56.290 Test: test_identify_ns ...passed 00:05:56.290 Test: test_identify_ns_iocs_specific ...passed 00:05:56.290 Test: test_reservation_write_exclusive ...passed 00:05:56.290 Test: test_reservation_exclusive_access ...[2024-07-23 06:19:07.114559] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:05:56.290 [2024-07-23 06:19:07.114572] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:05:56.290 [2024-07-23 06:19:07.114585] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 696:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:05:56.290 [2024-07-23 06:19:07.114598] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 720:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:05:56.290 [2024-07-23 06:19:07.114628] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 295:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:05:56.290 [2024-07-23 06:19:07.114646] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group 0x0) 00:05:56.290 [2024-07-23 06:19:07.114659] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group 0x0) 00:05:56.290 [2024-07-23 06:19:07.114707] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.290 [2024-07-23 06:19:07.114767] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:05:56.290 [2024-07-23 06:19:07.114793] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:05:56.290 [2024-07-23 06:19:07.114821] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.290 [2024-07-23 06:19:07.114874] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.290 passed 00:05:56.290 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:05:56.290 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:05:56.290 Test: test_reservation_notification_log_page ...passed 00:05:56.290 Test: test_get_dif_ctx ...passed 00:05:56.290 Test: test_set_get_features ...passed 00:05:56.290 Test: test_identify_ctrlr ...passed 00:05:56.290 Test: test_identify_ctrlr_iocs_specific ...passed 00:05:56.290 Test: test_custom_admin_cmd ...passed 00:05:56.290 Test: test_fused_compare_and_write ...[2024-07-23 06:19:07.114971] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:05:56.290 [2024-07-23 06:19:07.114990] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:05:56.290 [2024-07-23 06:19:07.115000] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:05:56.290 [2024-07-23 06:19:07.115010] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:05:56.290 [2024-07-23 06:19:07.115104] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4249:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:05:56.290 passed 00:05:56.290 Test: test_multi_async_event_reqs ...passed 00:05:56.290 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:05:56.290 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:05:56.290 Test: test_multi_async_events ...passed 00:05:56.290 Test: test_rae ...passed 00:05:56.290 Test: test_nvmf_ctrlr_create_destruct ...passed 00:05:56.290 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:05:56.290 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:05:56.290 Test: test_zcopy_read ...passed 00:05:56.290 Test: test_zcopy_write ...passed 00:05:56.290 Test: test_nvmf_property_set ...passed 00:05:56.290 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:05:56.290 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-23 06:19:07.115116] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:05:56.290 [2024-07-23 06:19:07.115128] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4256:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:05:56.290 [2024-07-23 06:19:07.115204] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4742:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:05:56.290 [2024-07-23 06:19:07.115219] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4768:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:05:56.290 [2024-07-23 06:19:07.115255] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:05:56.290 [2024-07-23 06:19:07.115267] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:05:56.290 [2024-07-23 06:19:07.115280] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1970:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:05:56.290 [2024-07-23 06:19:07.115296] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1976:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:05:56.290 passed 00:05:56.290 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:05:56.290 Test: test_nvmf_check_qpair_active ...passed 00:05:56.290 00:05:56.290 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.290 suites 1 1 n/a 0 0 00:05:56.290 tests 32 32 32 0 0 00:05:56.290 asserts 983 983 983 0 n/a 00:05:56.290 00:05:56.290 Elapsed time = 0.000 seconds 00:05:56.290 [2024-07-23 06:19:07.115307] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1988:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:05:56.290 [2024-07-23 06:19:07.115318] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1988:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:05:56.290 [2024-07-23 06:19:07.115344] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4742:nvmf_check_qpair_active: *ERROR*: Received 
command 0x2 on qid 0 before CONNECT 00:05:56.290 [2024-07-23 06:19:07.115356] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:05:56.290 [2024-07-23 06:19:07.115366] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4768:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:05:56.290 [2024-07-23 06:19:07.115377] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4768:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:05:56.291 [2024-07-23 06:19:07.115387] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4768:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:05:56.291 06:19:07 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:05:56.291 00:05:56.291 00:05:56.291 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.291 http://cunit.sourceforge.net/ 00:05:56.291 00:05:56.291 00:05:56.291 Suite: nvmf 00:05:56.291 Test: test_get_rw_params ...passed 00:05:56.291 Test: test_get_rw_ext_params ...passed 00:05:56.291 Test: test_lba_in_range ...passed 00:05:56.291 Test: test_get_dif_ctx ...passed 00:05:56.291 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:05:56.291 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-23 06:19:07.123266] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:05:56.291 [2024-07-23 06:19:07.124062] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:05:56.291 [2024-07-23 06:19:07.124089] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 463:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:05:56.291 [2024-07-23 06:19:07.124102] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:05:56.291 passed 00:05:56.291 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:05:56.291 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-23 06:19:07.124125] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 973:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:05:56.291 [2024-07-23 06:19:07.124134] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:05:56.291 [2024-07-23 06:19:07.124142] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 409:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:05:56.291 [2024-07-23 06:19:07.124150] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:05:56.291 passed 00:05:56.291 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:05:56.291 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:05:56.291 00:05:56.291 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.291 suites 1 1 n/a 0 0 00:05:56.291 tests 10 10 10 0 0 00:05:56.291 asserts 159 159 159 0 n/a 00:05:56.291 00:05:56.291 Elapsed time = 0.000 seconds 00:05:56.291 [2024-07-23 06:19:07.124157] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:05:56.291 06:19:07 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:05:56.291 00:05:56.291 00:05:56.291 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.291 http://cunit.sourceforge.net/ 00:05:56.291 00:05:56.291 00:05:56.291 Suite: nvmf 00:05:56.291 Test: test_discovery_log ...passed 00:05:56.291 Test: test_discovery_log_with_filters ...passed 00:05:56.291 00:05:56.291 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.291 suites 1 1 n/a 0 0 00:05:56.291 tests 2 2 2 0 0 00:05:56.291 asserts 238 238 238 0 n/a 00:05:56.291 00:05:56.291 Elapsed time = 0.000 seconds 00:05:56.291 06:19:07 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:05:56.291 00:05:56.291 00:05:56.291 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.291 http://cunit.sourceforge.net/ 00:05:56.291 00:05:56.291 00:05:56.291 Suite: nvmf 00:05:56.291 Test: nvmf_test_create_subsystem ...[2024-07-23 06:19:07.135264] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:05:56.291 [2024-07-23 06:19:07.135427] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:05:56.291 [2024-07-23 06:19:07.135454] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:05:56.291 [2024-07-23 06:19:07.135464] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:05:56.291 [2024-07-23 06:19:07.135473] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:05:56.291 [2024-07-23 06:19:07.135482] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:05:56.291 [2024-07-23 06:19:07.135491] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:05:56.291 [2024-07-23 06:19:07.135499] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:05:56.291 [2024-07-23 06:19:07.135508] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:05:56.291 [2024-07-23 06:19:07.135517] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:05:56.291 [2024-07-23 06:19:07.135525] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
00:05:56.291 [2024-07-23 06:19:07.135533] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:05:56.291 [2024-07-23 06:19:07.135547] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:05:56.291 [2024-07-23 06:19:07.135556] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:05:56.291 [2024-07-23 06:19:07.135578] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:05:56.291 [2024-07-23 06:19:07.135593] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:05:56.291 [2024-07-23 06:19:07.135605] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:05:56.291 [2024-07-23 06:19:07.135613] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:05:56.292 [2024-07-23 06:19:07.135622] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:05:56.292 [2024-07-23 06:19:07.135631] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:05:56.292 [2024-07-23 06:19:07.135640] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:05:56.292 passed 00:05:56.292 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-23 06:19:07.135648] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:05:56.292 [2024-07-23 06:19:07.135707] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:05:56.292 passed 00:05:56.292 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-23 06:19:07.135722] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2031:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:05:56.292 [2024-07-23 06:19:07.135744] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2162:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
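The subsystem_ut error messages above come from negative tests of the NQN validation path: they spell out a minimum length of 11 and a maximum of 223 characters, a ':' introducing the user-specified name, label rules, and a fixed-length UUID form. The fragment below is a minimal standalone sketch of a few such checks, using only the limits quoted in those messages plus the well-known "nqn." prefix; it is illustrative only and is not SPDK's nvmf_nqn_is_valid(), whose real rules are stricter:

    /* Illustrative only: re-checks a few of the NQN rules named in the log above.
     * Not SPDK's nvmf_nqn_is_valid(); the limits used here (min length 11,
     * max length 223, "nqn." prefix, ':' before the user-specified name) are
     * taken from the error messages printed by the unit test. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static bool nqn_looks_valid(const char *nqn)
    {
        size_t len = strlen(nqn);

        if (len < 11 || len > 223) {
            return false;    /* "length 0 < min 11", "length 224 > max 223" */
        }
        if (strncmp(nqn, "nqn.", 4) != 0) {
            return false;    /* every NQN begins with "nqn." */
        }
        const char *colon = strchr(nqn + 4, ':');
        if (colon == NULL || colon[1] == '\0') {
            return false;    /* "must contain user specified name with a ':' as a prefix" */
        }
        return true;
    }

    int main(void)
    {
        printf("%d\n", nqn_looks_valid("nqn.2016-06.io.spdk:cnode1")); /* 1 */
        printf("%d\n", nqn_looks_valid("nqn.2016-06.io.spdk:"));       /* 0 */
        printf("%d\n", nqn_looks_valid("nqn."));                       /* 0 */
        return 0;
    }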
00:05:56.292 passed 00:05:56.292 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:05:56.292 Test: test_spdk_nvmf_ns_visible ...[2024-07-23 06:19:07.135766] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:05:56.292 passed 00:05:56.292 Test: test_reservation_register ...[2024-07-23 06:19:07.135825] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:56.292 [2024-07-23 06:19:07.135840] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3164:nvmf_ns_reservation_register: *ERROR*: No registrant 00:05:56.292 passed 00:05:56.292 Test: test_reservation_register_with_ptpl ...passed 00:05:56.292 Test: test_reservation_acquire_preempt_1 ...[2024-07-23 06:19:07.135992] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:56.292 passed 00:05:56.292 Test: test_reservation_acquire_release_with_ptpl ...passed 00:05:56.292 Test: test_reservation_release ...[2024-07-23 06:19:07.136130] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:56.292 passed 00:05:56.292 Test: test_reservation_unregister_notification ...[2024-07-23 06:19:07.136151] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:56.292 passed 00:05:56.292 Test: test_reservation_release_notification ...passed 00:05:56.292 Test: test_reservation_release_notification_write_exclusive ...[2024-07-23 06:19:07.136167] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:56.292 [2024-07-23 06:19:07.136182] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:56.292 passed 00:05:56.292 Test: test_reservation_clear_notification ...passed 00:05:56.292 Test: test_reservation_preempt_notification ...[2024-07-23 06:19:07.136201] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:56.292 [2024-07-23 06:19:07.136217] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:56.292 passed 00:05:56.292 Test: test_spdk_nvmf_ns_event ...passed 00:05:56.292 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:05:56.292 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:05:56.292 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-23 06:19:07.136283] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 265:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:05:56.292 [2024-07-23 06:19:07.136298] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:05:56.292 passed 00:05:56.292 Test: test_nvmf_ns_reservation_report ...[2024-07-23 06:19:07.136314] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3470:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:05:56.292 passed 00:05:56.292 Test: test_nvmf_nqn_is_valid ...[2024-07-23 
06:19:07.136336] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:05:56.292 [2024-07-23 06:19:07.136345] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:776b0cd9-48bb-11ef-a06c-59ddad71024": uuid is not the correct length 00:05:56.292 [2024-07-23 06:19:07.136355] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:05:56.292 passed 00:05:56.292 Test: test_nvmf_ns_reservation_restore ...[2024-07-23 06:19:07.136379] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2663:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:05:56.292 passed 00:05:56.292 Test: test_nvmf_subsystem_state_change ...passed 00:05:56.292 Test: test_nvmf_reservation_custom_ops ...passed 00:05:56.292 00:05:56.292 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.292 suites 1 1 n/a 0 0 00:05:56.292 tests 24 24 24 0 0 00:05:56.292 asserts 499 499 499 0 n/a 00:05:56.292 00:05:56.292 Elapsed time = 0.000 seconds 00:05:56.292 06:19:07 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:05:56.292 00:05:56.292 00:05:56.292 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.292 http://cunit.sourceforge.net/ 00:05:56.292 00:05:56.292 00:05:56.292 Suite: nvmf 00:05:56.292 Test: test_nvmf_tcp_create ...[2024-07-23 06:19:07.145278] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 750:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:05:56.292 passed 00:05:56.292 Test: test_nvmf_tcp_destroy ...passed 00:05:56.292 Test: test_nvmf_tcp_poll_group_create ...passed 00:05:56.292 Test: test_nvmf_tcp_send_c2h_data ...passed 00:05:56.292 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:05:56.292 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:05:56.292 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:05:56.292 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-23 06:19:07.156271] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:56.292 [2024-07-23 06:19:07.156305] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d8d8 is same with the state(5) to be set 00:05:56.292 [2024-07-23 06:19:07.156320] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d8d8 is same with the state(5) to be set 00:05:56.292 [2024-07-23 06:19:07.156330] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:56.292 [2024-07-23 06:19:07.156338] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d8d8 is same with the state(5) to be set 00:05:56.292 passed 00:05:56.292 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:05:56.292 Test: test_nvmf_tcp_icreq_handle ...[2024-07-23 06:19:07.156372] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2168:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:05:56.292 [2024-07-23 06:19:07.156382] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 
00:05:56.292 [2024-07-23 06:19:07.156391] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d7d8 is same with the state(5) to be set 00:05:56.292 [2024-07-23 06:19:07.156399] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2168:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:05:56.292 passed 00:05:56.292 Test: test_nvmf_tcp_check_xfer_type ...passed 00:05:56.292 Test: test_nvmf_tcp_invalid_sgl ...passed 00:05:56.292 Test: test_nvmf_tcp_pdu_ch_handle ...passed 00:05:56.292 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-23 06:19:07.156407] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d7d8 is same with the state(5) to be set 00:05:56.292 [2024-07-23 06:19:07.156416] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:56.292 [2024-07-23 06:19:07.156424] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d7d8 is same with the state(5) to be set 00:05:56.292 [2024-07-23 06:19:07.156432] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:05:56.292 [2024-07-23 06:19:07.156440] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d7d8 is same with the state(5) to be set 00:05:56.292 [2024-07-23 06:19:07.156457] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2564:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:05:56.292 [2024-07-23 06:19:07.156466] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:56.292 [2024-07-23 06:19:07.156474] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d7d8 is same with the state(5) to be set 00:05:56.292 [2024-07-23 06:19:07.156484] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2295:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x820c7d068 00:05:56.293 [2024-07-23 06:19:07.156493] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:56.293 [2024-07-23 06:19:07.156501] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d8d8 is same with the state(5) to be set 00:05:56.293 [2024-07-23 06:19:07.156510] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2354:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x820c7d8d8 00:05:56.293 [2024-07-23 06:19:07.156518] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:56.293 [2024-07-23 06:19:07.156526] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d8d8 is same with the state(5) to be set 00:05:56.293 [2024-07-23 06:19:07.156535] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2305:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:05:56.293 [2024-07-23 06:19:07.156543] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:56.293 
[2024-07-23 06:19:07.156551] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d8d8 is same with the state(5) to be set 00:05:56.293 [2024-07-23 06:19:07.156559] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2344:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:05:56.293 [2024-07-23 06:19:07.156568] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:56.293 [2024-07-23 06:19:07.156575] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d8d8 is same with the state(5) to be set 00:05:56.293 [2024-07-23 06:19:07.156584] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:56.293 [2024-07-23 06:19:07.156592] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d8d8 is same with the state(5) to be set 00:05:56.293 [2024-07-23 06:19:07.156600] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:56.293 [2024-07-23 06:19:07.156608] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d8d8 is same with the state(5) to be set 00:05:56.293 [2024-07-23 06:19:07.156616] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:56.293 [2024-07-23 06:19:07.156630] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d8d8 is same with the state(5) to be set 00:05:56.293 [2024-07-23 06:19:07.156639] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:56.293 [2024-07-23 06:19:07.156660] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d8d8 is same with the state(5) to be set 00:05:56.293 [2024-07-23 06:19:07.156684] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:56.293 [2024-07-23 06:19:07.156699] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d8d8 is same with the state(5) to be set 00:05:56.293 [2024-07-23 06:19:07.156708] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:56.293 [2024-07-23 06:19:07.156716] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7d8d8 is same with the state(5) to be set 00:05:56.293 passed 00:05:56.293 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-23 06:19:07.161672] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:05:56.293 passed 00:05:56.293 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-23 06:19:07.161699] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
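The tcp_ut error messages above exercise the NVMe/TCP connection-setup checks: an ICReq PDU is expected to arrive with PDU type 0x00, a 128-byte header, and PDU format version (PFV) 0, and a second ICReq on the same connection is rejected. The sketch below condenses those three checks into one helper; the struct is a reduced stand-in for illustration (in the real protocol the PFV sits in the ICReq-specific portion of the PDU, not the common header), so treat the field names and layout as assumptions:

    /* Minimal sketch of the ICReq sanity checks that tcp_ut probes above.
     * The reduced struct and field names are assumptions for illustration;
     * the constants (type 0x00, hlen 128, PFV 0) are the values the log expects. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct icreq_hdr {
        uint8_t  pdu_type;  /* 0x00 for ICReq */
        uint8_t  hlen;      /* 128 for ICReq */
        uint16_t pfv;       /* PDU format version, expected 0 */
    };

    static bool icreq_ok(const struct icreq_hdr *h, bool icreq_already_seen)
    {
        if (icreq_already_seen) {
            return false;   /* "Already received ICreq PDU, and reject this pdu" */
        }
        if (h->pdu_type != 0x00 || h->hlen != 128) {
            return false;   /* "Expected ICReq header length 128, got 0" */
        }
        if (h->pfv != 0) {
            return false;   /* "Expected ICReq PFV 0, got 1" */
        }
        return true;
    }

    int main(void)
    {
        struct icreq_hdr good = { 0x00, 128, 0 };
        struct icreq_hdr bad  = { 0x00, 128, 1 };

        printf("%d %d\n", icreq_ok(&good, false), icreq_ok(&bad, false)); /* 1 0 */
        return 0;
    }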
00:05:56.293 [2024-07-23 06:19:07.161847] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:05:56.293 [2024-07-23 06:19:07.161863] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:05:56.293 passed 00:05:56.293 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:05:56.293 00:05:56.293 [2024-07-23 06:19:07.162340] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:05:56.293 [2024-07-23 06:19:07.162359] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:05:56.293 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.293 suites 1 1 n/a 0 0 00:05:56.293 tests 17 17 17 0 0 00:05:56.293 asserts 222 222 222 0 n/a 00:05:56.293 00:05:56.293 Elapsed time = 0.008 seconds 00:05:56.293 06:19:07 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:05:56.293 00:05:56.293 00:05:56.293 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.293 http://cunit.sourceforge.net/ 00:05:56.293 00:05:56.293 00:05:56.293 Suite: nvmf 00:05:56.293 Test: test_nvmf_tgt_create_poll_group ...passed 00:05:56.293 00:05:56.293 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.293 suites 1 1 n/a 0 0 00:05:56.293 tests 1 1 1 0 0 00:05:56.293 asserts 17 17 17 0 n/a 00:05:56.293 00:05:56.293 Elapsed time = 0.000 seconds 00:05:56.293 00:05:56.293 real 0m0.066s 00:05:56.293 user 0m0.028s 00:05:56.293 sys 0m0.040s 00:05:56.293 06:19:07 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.293 ************************************ 00:05:56.293 END TEST unittest_nvmf 00:05:56.293 ************************************ 00:05:56.293 06:19:07 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:05:56.293 06:19:07 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.293 06:19:07 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:56.293 06:19:07 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:56.293 06:19:07 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:05:56.293 06:19:07 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.293 06:19:07 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.293 06:19:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.293 ************************************ 00:05:56.293 START TEST unittest_nvmf_rdma 00:05:56.293 ************************************ 00:05:56.293 06:19:07 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:05:56.293 00:05:56.293 00:05:56.293 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.293 http://cunit.sourceforge.net/ 00:05:56.293 00:05:56.293 00:05:56.293 Suite: nvmf 00:05:56.293 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-23 06:19:07.225877] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1864:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 
00:05:56.293 passed 00:05:56.293 Test: test_spdk_nvmf_rdma_request_process ...[2024-07-23 06:19:07.226297] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:05:56.293 [2024-07-23 06:19:07.226570] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:05:56.293 passed 00:05:56.293 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:05:56.293 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:05:56.293 Test: test_nvmf_rdma_opts_init ...passed 00:05:56.293 Test: test_nvmf_rdma_request_free_data ...passed 00:05:56.293 Test: test_nvmf_rdma_resources_create ...passed 00:05:56.293 Test: test_nvmf_rdma_qpair_compare ...passed 00:05:56.293 Test: test_nvmf_rdma_resize_cq ...[2024-07-23 06:19:07.227588] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:05:56.293 Using CQ of insufficient size may lead to CQ overrun 00:05:56.293 [2024-07-23 06:19:07.227618] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 960:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:05:56.293 passed 00:05:56.293 00:05:56.293 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.293 suites 1 1 n/a 0 0 00:05:56.293 tests 9 9 9 0 0 00:05:56.293 asserts 579 579 579 0 n/a 00:05:56.293 00:05:56.293 Elapsed time = 0.000 seconds 00:05:56.293 [2024-07-23 06:19:07.227673] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:05:56.293 00:05:56.293 real 0m0.008s 00:05:56.293 user 0m0.003s 00:05:56.293 sys 0m0.006s 00:05:56.293 06:19:07 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.293 06:19:07 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:56.293 ************************************ 00:05:56.293 END TEST unittest_nvmf_rdma 00:05:56.293 ************************************ 00:05:56.293 06:19:07 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.294 06:19:07 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:56.294 06:19:07 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:05:56.294 06:19:07 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.294 06:19:07 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.294 06:19:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.294 ************************************ 00:05:56.294 START TEST unittest_scsi 00:05:56.294 ************************************ 00:05:56.294 06:19:07 unittest.unittest_scsi -- common/autotest_common.sh@1123 -- # unittest_scsi 00:05:56.294 06:19:07 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:05:56.294 00:05:56.294 00:05:56.294 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.294 http://cunit.sourceforge.net/ 00:05:56.294 00:05:56.294 00:05:56.294 Suite: dev_suite 00:05:56.294 Test: dev_destruct_null_dev ...passed 00:05:56.294 Test: dev_destruct_zero_luns ...passed 00:05:56.294 Test: dev_destruct_null_lun ...passed 00:05:56.294 Test: dev_destruct_success ...passed 00:05:56.294 Test: dev_construct_num_luns_zero 
...[2024-07-23 06:19:07.274541] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:05:56.294 passed 00:05:56.294 Test: dev_construct_no_lun_zero ...passed[2024-07-23 06:19:07.274743] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:05:56.294 00:05:56.294 Test: dev_construct_null_lun ...passed 00:05:56.294 Test: dev_construct_name_too_long ...passed 00:05:56.294 Test: dev_construct_success ...passed 00:05:56.294 Test: dev_construct_success_lun_zero_not_first ...passed 00:05:56.294 Test: dev_queue_mgmt_task_success ...passed 00:05:56.294 Test: dev_queue_task_success ...passed 00:05:56.294 Test: dev_stop_success ...passed 00:05:56.294 Test: dev_add_port_max_ports ...passed 00:05:56.294 Test: dev_add_port_construct_failure1 ...passed 00:05:56.294 Test: dev_add_port_construct_failure2 ...[2024-07-23 06:19:07.274764] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:05:56.294 [2024-07-23 06:19:07.274779] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:05:56.294 [2024-07-23 06:19:07.274826] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:05:56.294 [2024-07-23 06:19:07.274857] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:05:56.294 [2024-07-23 06:19:07.274871] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:05:56.294 passed 00:05:56.294 Test: dev_add_port_success1 ...passed 00:05:56.294 Test: dev_add_port_success2 ...passed 00:05:56.294 Test: dev_add_port_success3 ...passed 00:05:56.294 Test: dev_find_port_by_id_num_ports_zero ...passed 00:05:56.294 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:05:56.294 Test: dev_find_port_by_id_success ...passed 00:05:56.294 Test: dev_add_lun_bdev_not_found ...passed 00:05:56.294 Test: dev_add_lun_no_free_lun_id ...passed 00:05:56.294 Test: dev_add_lun_success1 ...passed 00:05:56.294 Test: dev_add_lun_success2 ...passed 00:05:56.294 Test: dev_check_pending_tasks ...passed 00:05:56.294 Test: dev_iterate_luns ...passed 00:05:56.294 Test: dev_find_free_lun ...[2024-07-23 06:19:07.275102] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:05:56.294 passed 00:05:56.294 00:05:56.294 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.294 suites 1 1 n/a 0 0 00:05:56.294 tests 29 29 29 0 0 00:05:56.294 asserts 97 97 97 0 n/a 00:05:56.294 00:05:56.294 Elapsed time = 0.000 seconds 00:05:56.294 06:19:07 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:05:56.294 00:05:56.294 00:05:56.294 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.294 http://cunit.sourceforge.net/ 00:05:56.294 00:05:56.294 00:05:56.294 Suite: lun_suite 00:05:56.294 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:05:56.294 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 
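The dev_suite error messages above probe the argument validation of spdk_scsi_dev_construct_ext() as reported in the log: a device must be given at least one LUN and specifically LUN 0, its name may not exceed 255 characters, it can carry at most 4 ports, and a port ID may not be reused. The fragment below restates a subset of those limits as a standalone check; the types and helper are invented for illustration and are not the SPDK structures:

    /* Restates the dev_suite limits quoted in the log as a standalone check.
     * The types below are illustrative stand-ins, not SPDK's spdk_scsi_dev. */
    #include <stdbool.h>
    #include <string.h>

    #define MAX_DEV_NAME 255   /* "name longer than maximum allowed length 255" */
    #define MAX_PORTS    4     /* "device already has 4 ports" */

    struct dev_args {
        const char *name;
        size_t      num_luns;
        bool        has_lun0;
        size_t      num_ports;
    };

    static bool dev_args_ok(const struct dev_args *a)
    {
        if (a->num_luns == 0) return false;              /* "no LUNs specified" */
        if (!a->has_lun0)     return false;              /* "no LUN 0 specified" */
        if (strlen(a->name) > MAX_DEV_NAME) return false;
        if (a->num_ports > MAX_PORTS)       return false;
        return true;
    }

    int main(void)
    {
        struct dev_args ok = { "Malloc0", 1, true, 1 };
        return dev_args_ok(&ok) ? 0 : 1;
    }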
00:05:56.294 Test: lun_task_mgmt_execute_lun_reset ...passed 00:05:56.294 Test: lun_task_mgmt_execute_target_reset ...passed 00:05:56.294 Test: lun_task_mgmt_execute_invalid_case ...passed 00:05:56.294 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...[2024-07-23 06:19:07.280546] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:05:56.294 [2024-07-23 06:19:07.280731] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:05:56.294 [2024-07-23 06:19:07.280752] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:05:56.294 passed 00:05:56.294 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:05:56.294 Test: lun_append_task_null_lun_not_supported ...passed 00:05:56.294 Test: lun_execute_scsi_task_pending ...passed 00:05:56.294 Test: lun_execute_scsi_task_complete ...passed 00:05:56.294 Test: lun_execute_scsi_task_resize ...passed 00:05:56.294 Test: lun_destruct_success ...passed 00:05:56.294 Test: lun_construct_null_ctx ...passed 00:05:56.294 Test: lun_construct_success ...passed 00:05:56.294 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:05:56.294 Test: lun_reset_task_suspend_scsi_task ...passed 00:05:56.294 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:05:56.294 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:05:56.294 00:05:56.294 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.294 suites 1 1 n/a 0 0 00:05:56.294 tests 18 18 18 0 0 00:05:56.294 asserts 153 153 153 0 n/a 00:05:56.294 00:05:56.294 Elapsed time = 0.000 seconds 00:05:56.294 [2024-07-23 06:19:07.280808] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:05:56.294 06:19:07 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:05:56.294 00:05:56.294 00:05:56.294 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.294 http://cunit.sourceforge.net/ 00:05:56.294 00:05:56.294 00:05:56.294 Suite: scsi_suite 00:05:56.294 Test: scsi_init ...passed 00:05:56.294 00:05:56.294 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.294 suites 1 1 n/a 0 0 00:05:56.294 tests 1 1 1 0 0 00:05:56.294 asserts 1 1 1 0 n/a 00:05:56.294 00:05:56.294 Elapsed time = 0.000 seconds 00:05:56.294 06:19:07 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:05:56.294 00:05:56.294 00:05:56.294 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.294 http://cunit.sourceforge.net/ 00:05:56.294 00:05:56.294 00:05:56.294 Suite: translation_suite 00:05:56.294 Test: mode_select_6_test ...passed 00:05:56.294 Test: mode_select_6_test2 ...passed 00:05:56.294 Test: mode_sense_6_test ...passed 00:05:56.294 Test: mode_sense_10_test ...passed 00:05:56.294 Test: inquiry_evpd_test ...passed 00:05:56.294 Test: inquiry_standard_test ...passed 00:05:56.294 Test: inquiry_overflow_test ...passed 00:05:56.294 Test: task_complete_test ...passed 00:05:56.294 Test: lba_range_test ...passed 00:05:56.294 Test: xfer_len_test ...[2024-07-23 06:19:07.291644] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:05:56.294 passed 00:05:56.294 Test: xfer_test ...passed 00:05:56.294 Test: scsi_name_padding_test ...passed 
00:05:56.294 Test: get_dif_ctx_test ...passed 00:05:56.294 Test: unmap_split_test ...passed 00:05:56.294 00:05:56.294 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.294 suites 1 1 n/a 0 0 00:05:56.294 tests 14 14 14 0 0 00:05:56.294 asserts 1205 1205 1205 0 n/a 00:05:56.294 00:05:56.294 Elapsed time = 0.000 seconds 00:05:56.294 06:19:07 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:05:56.294 00:05:56.294 00:05:56.294 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.294 http://cunit.sourceforge.net/ 00:05:56.294 00:05:56.294 00:05:56.294 Suite: reservation_suite 00:05:56.294 Test: test_reservation_register ...[2024-07-23 06:19:07.297810] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:56.294 passed 00:05:56.295 Test: test_reservation_reserve ...passed 00:05:56.295 Test: test_all_registrant_reservation_reserve ...passed 00:05:56.295 Test: test_all_registrant_reservation_access ...passed 00:05:56.295 Test: test_reservation_preempt_non_all_regs ...passed 00:05:56.295 Test: test_reservation_preempt_all_regs ...passed 00:05:56.295 Test: test_reservation_cmds_conflict ...[2024-07-23 06:19:07.298304] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:56.295 [2024-07-23 06:19:07.298319] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:05:56.295 [2024-07-23 06:19:07.298329] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:05:56.295 [2024-07-23 06:19:07.298343] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:56.295 [2024-07-23 06:19:07.298359] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:56.295 [2024-07-23 06:19:07.298370] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:05:56.295 [2024-07-23 06:19:07.298379] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:05:56.295 [2024-07-23 06:19:07.298392] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:56.295 [2024-07-23 06:19:07.298401] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:05:56.295 [2024-07-23 06:19:07.298415] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:56.295 [2024-07-23 06:19:07.298437] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:56.295 [2024-07-23 06:19:07.298448] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 858:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:05:56.295 [2024-07-23 06:19:07.298457] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:05:56.295 [2024-07-23 
06:19:07.298465] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:05:56.295 [2024-07-23 06:19:07.298473] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:05:56.295 passed 00:05:56.295 Test: test_scsi2_reserve_release ...passed 00:05:56.295 Test: test_pr_with_scsi2_reserve_release ...passed 00:05:56.295 00:05:56.295 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.295 suites 1 1 n/a 0 0 00:05:56.295 tests 9 9 9 0 0 00:05:56.295 asserts 344 344 344 0 n/a 00:05:56.295 00:05:56.295 Elapsed time = 0.000 seconds 00:05:56.295 [2024-07-23 06:19:07.298482] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:05:56.295 [2024-07-23 06:19:07.298501] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:56.295 00:05:56.295 real 0m0.029s 00:05:56.295 user 0m0.004s 00:05:56.295 sys 0m0.025s 00:05:56.295 06:19:07 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.295 06:19:07 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:05:56.295 ************************************ 00:05:56.295 END TEST unittest_scsi 00:05:56.295 ************************************ 00:05:56.295 06:19:07 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.295 06:19:07 unittest -- unit/unittest.sh@278 -- # uname -s 00:05:56.295 06:19:07 unittest -- unit/unittest.sh@278 -- # '[' FreeBSD = Linux ']' 00:05:56.295 06:19:07 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:05:56.295 06:19:07 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.295 06:19:07 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.295 06:19:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.295 ************************************ 00:05:56.295 START TEST unittest_thread 00:05:56.295 ************************************ 00:05:56.295 06:19:07 unittest.unittest_thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:05:56.295 00:05:56.295 00:05:56.295 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.295 http://cunit.sourceforge.net/ 00:05:56.295 00:05:56.295 00:05:56.295 Suite: io_channel 00:05:56.295 Test: thread_alloc ...passed 00:05:56.295 Test: thread_send_msg ...passed 00:05:56.295 Test: thread_poller ...passed 00:05:56.295 Test: poller_pause ...passed 00:05:56.295 Test: thread_for_each ...passed 00:05:56.295 Test: for_each_channel_remove ...passed 00:05:56.295 Test: for_each_channel_unreg ...[2024-07-23 06:19:07.346898] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x8211d6504 already registered (old:0x34c370067000 new:0x34c370067180) 00:05:56.295 passed 00:05:56.295 Test: thread_name ...passed 00:05:56.295 Test: channel ...[2024-07-23 06:19:07.347459] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x2287f8 00:05:56.295 passed 00:05:56.295 Test: channel_destroy_races ...passed 00:05:56.295 Test: thread_exit_test ...[2024-07-23 06:19:07.347938] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 
640:thread_exit: *ERROR*: thread 0x34c37002ca80 got timeout, and move it to the exited state forcefully 00:05:56.295 passed 00:05:56.295 Test: thread_update_stats_test ...passed 00:05:56.295 Test: nested_channel ...passed 00:05:56.295 Test: device_unregister_and_thread_exit_race ...passed 00:05:56.295 Test: cache_closest_timed_poller ...passed 00:05:56.295 Test: multi_timed_pollers_have_same_expiration ...passed 00:05:56.295 Test: io_device_lookup ...passed 00:05:56.295 Test: spdk_spin ...[2024-07-23 06:19:07.349072] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:05:56.295 [2024-07-23 06:19:07.349115] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x8211d6500 00:05:56.295 [2024-07-23 06:19:07.349132] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:05:56.295 [2024-07-23 06:19:07.349358] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:56.295 [2024-07-23 06:19:07.349379] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x8211d6500 00:05:56.295 [2024-07-23 06:19:07.349395] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:05:56.295 [2024-07-23 06:19:07.349409] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x8211d6500 00:05:56.295 passed 00:05:56.295 Test: for_each_channel_and_thread_exit_race ...[2024-07-23 06:19:07.349425] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:05:56.295 [2024-07-23 06:19:07.349440] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x8211d6500 00:05:56.295 [2024-07-23 06:19:07.349455] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:05:56.295 [2024-07-23 06:19:07.349470] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x8211d6500 00:05:56.295 passed 00:05:56.295 Test: for_each_thread_and_thread_exit_race ...passed 00:05:56.295 00:05:56.295 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.295 suites 1 1 n/a 0 0 00:05:56.295 tests 20 20 20 0 0 00:05:56.295 asserts 409 409 409 0 n/a 00:05:56.295 00:05:56.295 Elapsed time = 0.008 seconds 00:05:56.295 00:05:56.295 real 0m0.013s 00:05:56.295 user 0m0.011s 00:05:56.295 sys 0m0.005s 00:05:56.295 06:19:07 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.295 06:19:07 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.295 ************************************ 00:05:56.295 END TEST unittest_thread 00:05:56.295 ************************************ 00:05:56.295 06:19:07 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.295 06:19:07 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:05:56.295 06:19:07 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:05:56.295 06:19:07 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.295 06:19:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.295 ************************************ 00:05:56.295 START TEST unittest_iobuf 00:05:56.295 ************************************ 00:05:56.295 06:19:07 unittest.unittest_iobuf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:05:56.295 00:05:56.295 00:05:56.295 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.295 http://cunit.sourceforge.net/ 00:05:56.295 00:05:56.295 00:05:56.295 Suite: io_channel 00:05:56.295 Test: iobuf ...passed 00:05:56.296 Test: iobuf_cache ...[2024-07-23 06:19:07.392207] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:05:56.296 [2024-07-23 06:19:07.392391] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:05:56.296 [2024-07-23 06:19:07.392417] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 374:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:05:56.296 [2024-07-23 06:19:07.392426] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 376:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:05:56.296 [2024-07-23 06:19:07.392435] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:05:56.296 [2024-07-23 06:19:07.392442] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
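The iobuf_cache warnings above are the intended outcome of a deliberately undersized pool, and the counts can be read directly from the messages. Assuming the per-channel caches are filled in registration order, the arithmetic works out as:

    small pool holds 4 buffers in total (small_pool_count = 4)
      ut_module0 asks to cache 5 -> receives 4, logged as "4/5 entries", pool now empty
      ut_module1 asks to cache 4 -> receives 0, logged as "0/4 entries"
    the large pool shows the same 4/5 shortfall for ut_module0 (large_pool_count is also 4)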
00:05:56.296 passed 00:05:56.296 Test: iobuf_priority ...passed 00:05:56.296 00:05:56.296 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.296 suites 1 1 n/a 0 0 00:05:56.296 tests 3 3 3 0 0 00:05:56.296 asserts 131 131 131 0 n/a 00:05:56.296 00:05:56.296 Elapsed time = 0.000 seconds 00:05:56.296 00:05:56.296 real 0m0.005s 00:05:56.296 user 0m0.004s 00:05:56.296 sys 0m0.003s 00:05:56.296 06:19:07 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.296 06:19:07 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:05:56.296 ************************************ 00:05:56.296 END TEST unittest_iobuf 00:05:56.296 ************************************ 00:05:56.296 06:19:07 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.296 06:19:07 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:05:56.296 06:19:07 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.296 06:19:07 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.296 06:19:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.296 ************************************ 00:05:56.296 START TEST unittest_util 00:05:56.296 ************************************ 00:05:56.296 06:19:07 unittest.unittest_util -- common/autotest_common.sh@1123 -- # unittest_util 00:05:56.296 06:19:07 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:05:56.296 00:05:56.296 00:05:56.296 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.296 http://cunit.sourceforge.net/ 00:05:56.296 00:05:56.296 00:05:56.296 Suite: base64 00:05:56.296 Test: test_base64_get_encoded_strlen ...passed 00:05:56.296 Test: test_base64_get_decoded_len ...passed 00:05:56.296 Test: test_base64_encode ...passed 00:05:56.296 Test: test_base64_decode ...passed 00:05:56.296 Test: test_base64_urlsafe_encode ...passed 00:05:56.296 Test: test_base64_urlsafe_decode ...passed 00:05:56.296 00:05:56.296 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.296 suites 1 1 n/a 0 0 00:05:56.296 tests 6 6 6 0 0 00:05:56.296 asserts 112 112 112 0 n/a 00:05:56.296 00:05:56.296 Elapsed time = 0.000 seconds 00:05:56.296 06:19:07 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:05:56.296 00:05:56.296 00:05:56.296 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.296 http://cunit.sourceforge.net/ 00:05:56.296 00:05:56.296 00:05:56.296 Suite: bit_array 00:05:56.296 Test: test_1bit ...passed 00:05:56.296 Test: test_64bit ...passed 00:05:56.296 Test: test_find ...passed 00:05:56.296 Test: test_resize ...passed 00:05:56.296 Test: test_errors ...passed 00:05:56.296 Test: test_count ...passed 00:05:56.296 Test: test_mask_store_load ...passed 00:05:56.296 Test: test_mask_clear ...passed 00:05:56.296 00:05:56.296 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.296 suites 1 1 n/a 0 0 00:05:56.296 tests 8 8 8 0 0 00:05:56.296 asserts 5075 5075 5075 0 n/a 00:05:56.296 00:05:56.296 Elapsed time = 0.000 seconds 00:05:56.296 06:19:07 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:05:56.296 00:05:56.296 00:05:56.296 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.296 http://cunit.sourceforge.net/ 00:05:56.296 00:05:56.296 00:05:56.296 Suite: cpuset 00:05:56.296 Test: test_cpuset ...passed 
00:05:56.296 Test: test_cpuset_parse ...[2024-07-23 06:19:07.440765] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:05:56.296 [2024-07-23 06:19:07.441042] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:05:56.296 [2024-07-23 06:19:07.441067] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:05:56.296 [2024-07-23 06:19:07.441085] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 237:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:05:56.296 [2024-07-23 06:19:07.441101] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:05:56.296 [2024-07-23 06:19:07.441116] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:05:56.296 [2024-07-23 06:19:07.441132] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:05:56.296 [2024-07-23 06:19:07.441147] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:05:56.296 passed 00:05:56.296 Test: test_cpuset_fmt ...passed 00:05:56.296 Test: test_cpuset_foreach ...passed 00:05:56.296 00:05:56.296 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.296 suites 1 1 n/a 0 0 00:05:56.296 tests 4 4 4 0 0 00:05:56.296 asserts 90 90 90 0 n/a 00:05:56.296 00:05:56.296 Elapsed time = 0.000 seconds 00:05:56.296 06:19:07 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:05:56.296 00:05:56.296 00:05:56.296 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.296 http://cunit.sourceforge.net/ 00:05:56.296 00:05:56.296 00:05:56.296 Suite: crc16 00:05:56.296 Test: test_crc16_t10dif ...passed 00:05:56.296 Test: test_crc16_t10dif_seed ...passed 00:05:56.296 Test: test_crc16_t10dif_copy ...passed 00:05:56.296 00:05:56.296 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.296 suites 1 1 n/a 0 0 00:05:56.296 tests 3 3 3 0 0 00:05:56.296 asserts 5 5 5 0 n/a 00:05:56.296 00:05:56.296 Elapsed time = 0.000 seconds 00:05:56.296 06:19:07 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:05:56.296 00:05:56.296 00:05:56.296 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.296 http://cunit.sourceforge.net/ 00:05:56.296 00:05:56.296 00:05:56.296 Suite: crc32_ieee 00:05:56.296 Test: test_crc32_ieee ...passed 00:05:56.296 00:05:56.296 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.296 suites 1 1 n/a 0 0 00:05:56.296 tests 1 1 1 0 0 00:05:56.296 asserts 1 1 1 0 n/a 00:05:56.296 00:05:56.296 Elapsed time = 0.000 seconds 00:05:56.296 06:19:07 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:05:56.296 00:05:56.296 00:05:56.296 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.296 http://cunit.sourceforge.net/ 00:05:56.296 00:05:56.296 00:05:56.296 Suite: crc32c 00:05:56.296 Test: test_crc32c ...passed 00:05:56.296 Test: test_crc32c_nvme ...passed 00:05:56.296 00:05:56.296 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.296 suites 1 1 n/a 0 0 00:05:56.296 tests 2 2 2 0 0 
00:05:56.296 asserts 16 16 16 0 n/a 00:05:56.296 00:05:56.296 Elapsed time = 0.000 seconds 00:05:56.296 06:19:07 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:05:56.296 00:05:56.296 00:05:56.297 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.297 http://cunit.sourceforge.net/ 00:05:56.297 00:05:56.297 00:05:56.297 Suite: crc64 00:05:56.297 Test: test_crc64_nvme ...passed 00:05:56.297 00:05:56.297 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.297 suites 1 1 n/a 0 0 00:05:56.297 tests 1 1 1 0 0 00:05:56.297 asserts 4 4 4 0 n/a 00:05:56.297 00:05:56.297 Elapsed time = 0.000 seconds 00:05:56.297 06:19:07 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:05:56.297 00:05:56.297 00:05:56.297 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.297 http://cunit.sourceforge.net/ 00:05:56.297 00:05:56.297 00:05:56.297 Suite: string 00:05:56.297 Test: test_parse_ip_addr ...passed 00:05:56.297 Test: test_str_chomp ...passed 00:05:56.297 Test: test_parse_capacity ...passed 00:05:56.297 Test: test_sprintf_append_realloc ...passed 00:05:56.297 Test: test_strtol ...passed 00:05:56.297 Test: test_strtoll ...passed 00:05:56.297 Test: test_strarray ...passed 00:05:56.297 Test: test_strcpy_replace ...passed 00:05:56.297 00:05:56.297 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.297 suites 1 1 n/a 0 0 00:05:56.297 tests 8 8 8 0 0 00:05:56.297 asserts 161 161 161 0 n/a 00:05:56.297 00:05:56.297 Elapsed time = 0.000 seconds 00:05:56.297 06:19:07 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:05:56.297 00:05:56.297 00:05:56.297 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.297 http://cunit.sourceforge.net/ 00:05:56.297 00:05:56.297 00:05:56.297 Suite: dif 00:05:56.297 Test: dif_generate_and_verify_test ...[2024-07-23 06:19:07.473117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:05:56.297 [2024-07-23 06:19:07.473325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:05:56.297 [2024-07-23 06:19:07.473370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:05:56.297 [2024-07-23 06:19:07.473409] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:05:56.297 [2024-07-23 06:19:07.473454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:05:56.297 passed 00:05:56.297 Test: dif_disable_check_test ...[2024-07-23 06:19:07.473493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:05:56.297 [2024-07-23 06:19:07.473643] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:05:56.297 [2024-07-23 06:19:07.473683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:05:56.297 [2024-07-23 06:19:07.473721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to 
compare App Tag: LBA=22, Expected=22, Actual=ffff 00:05:56.297 passed 00:05:56.297 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-23 06:19:07.473853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:05:56.297 [2024-07-23 06:19:07.473892] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:05:56.297 [2024-07-23 06:19:07.473941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:05:56.297 [2024-07-23 06:19:07.473980] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:05:56.297 [2024-07-23 06:19:07.474018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:56.297 [2024-07-23 06:19:07.474056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:56.297 [2024-07-23 06:19:07.474095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:56.297 [2024-07-23 06:19:07.474134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:56.297 passed 00:05:56.297 Test: dif_apptag_mask_test ...passed 00:05:56.297 Test: dif_sec_8_md_8_error_test ...passed 00:05:56.297 Test: dif_sec_512_md_0_error_test ...passed 00:05:56.297 Test: dif_sec_512_md_16_error_test ...passed 00:05:56.297 Test: dif_sec_4096_md_0_8_error_test ...passed 00:05:56.297 Test: dif_sec_4100_md_128_error_test ...passed 00:05:56.297 Test: dif_guard_seed_test ...passed 00:05:56.297 Test: dif_guard_value_test ...[2024-07-23 06:19:07.474173] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:05:56.297 [2024-07-23 06:19:07.474211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:05:56.297 [2024-07-23 06:19:07.474251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:05:56.297 [2024-07-23 06:19:07.474291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:05:56.297 [2024-07-23 06:19:07.474330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:05:56.297 [2024-07-23 06:19:07.474354] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 555:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:05:56.297 [2024-07-23 06:19:07.474364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:05:56.297 [2024-07-23 06:19:07.474373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:05:56.297 [2024-07-23 06:19:07.474381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:05:56.297 [2024-07-23 06:19:07.474390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:56.297 [2024-07-23 06:19:07.474398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:56.297 [2024-07-23 06:19:07.474406] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:56.297 [2024-07-23 06:19:07.474413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:56.297 [2024-07-23 06:19:07.474423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:05:56.297 [2024-07-23 06:19:07.474430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:05:56.297 passed 00:05:56.297 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:05:56.297 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:05:56.297 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:05:56.297 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:56.297 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:56.297 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:05:56.297 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:05:56.297 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:05:56.297 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:05:56.297 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:05:56.297 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:05:56.298 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:05:56.298 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:05:56.298 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:05:56.298 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:05:56.298 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:05:56.298 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:56.298 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:56.298 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-23 06:19:07.479946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd48, Actual=fd4c 00:05:56.298 [2024-07-23 06:19:07.480261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fe25, Actual=fe21 00:05:56.298 [2024-07-23 06:19:07.480575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 06:19:07.480902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 
06:19:07.481219] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5a 00:05:56.298 [2024-07-23 06:19:07.481534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5a 00:05:56.298 [2024-07-23 06:19:07.481849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=8823 00:05:56.298 [2024-07-23 06:19:07.482117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fe21, Actual=dc0b 00:05:56.298 [2024-07-23 06:19:07.482377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753e9, Actual=1ab753ed 00:05:56.298 [2024-07-23 06:19:07.482689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=38574664, Actual=38574660 00:05:56.298 [2024-07-23 06:19:07.483009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 06:19:07.483321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 06:19:07.483640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5a 00:05:56.298 [2024-07-23 06:19:07.483951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5a 00:05:56.298 [2024-07-23 06:19:07.484266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=2d8e01e 00:05:56.298 [2024-07-23 06:19:07.484526] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=38574660, Actual=389ded4d 00:05:56.298 [2024-07-23 06:19:07.484794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d7, Actual=a576a7728ecc20d3 00:05:56.298 [2024-07-23 06:19:07.485107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=88010a2d4837a262, Actual=88010a2d4837a266 00:05:56.298 [2024-07-23 06:19:07.485419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 06:19:07.485731] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 06:19:07.486042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4005e 00:05:56.298 [2024-07-23 06:19:07.486354] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4005e 00:05:56.298 [2024-07-23 06:19:07.486678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=f93ad86b09e6e62e 00:05:56.298 [2024-07-23 06:19:07.486939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, 
Expected=88010a2d4837a266, Actual=6619964f83c55a06 00:05:56.298 passed 00:05:56.298 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-23 06:19:07.487065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:05:56.298 [2024-07-23 06:19:07.487107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:05:56.298 [2024-07-23 06:19:07.487148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 06:19:07.487189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 06:19:07.487230] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.298 [2024-07-23 06:19:07.487271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.298 [2024-07-23 06:19:07.487311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=8823 00:05:56.298 [2024-07-23 06:19:07.487342] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=dc0b 00:05:56.298 [2024-07-23 06:19:07.487374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e9, Actual=1ab753ed 00:05:56.298 [2024-07-23 06:19:07.487415] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574664, Actual=38574660 00:05:56.298 [2024-07-23 06:19:07.487455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 06:19:07.487496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 06:19:07.487536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.298 [2024-07-23 06:19:07.487577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.298 [2024-07-23 06:19:07.487617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2d8e01e 00:05:56.298 [2024-07-23 06:19:07.487648] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=389ded4d 00:05:56.298 [2024-07-23 06:19:07.487679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d7, Actual=a576a7728ecc20d3 00:05:56.298 [2024-07-23 06:19:07.487721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a262, Actual=88010a2d4837a266 00:05:56.298 [2024-07-23 06:19:07.487762] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 
06:19:07.487802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 06:19:07.487843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:05:56.298 [2024-07-23 06:19:07.487883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:05:56.298 passed 00:05:56.298 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-23 06:19:07.487924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f93ad86b09e6e62e 00:05:56.298 [2024-07-23 06:19:07.487954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=6619964f83c55a06 00:05:56.298 [2024-07-23 06:19:07.487994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:05:56.298 [2024-07-23 06:19:07.488035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:05:56.298 [2024-07-23 06:19:07.488076] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 06:19:07.488118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 06:19:07.488159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.298 [2024-07-23 06:19:07.488200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.298 [2024-07-23 06:19:07.488241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=8823 00:05:56.298 [2024-07-23 06:19:07.488271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=dc0b 00:05:56.298 [2024-07-23 06:19:07.488302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e9, Actual=1ab753ed 00:05:56.298 [2024-07-23 06:19:07.488343] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574664, Actual=38574660 00:05:56.298 [2024-07-23 06:19:07.488384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.298 [2024-07-23 06:19:07.488424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.488465] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.299 [2024-07-23 06:19:07.488506] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.299 [2024-07-23 06:19:07.488546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2d8e01e 00:05:56.299 [2024-07-23 06:19:07.488577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=389ded4d 00:05:56.299 [2024-07-23 06:19:07.488618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d7, Actual=a576a7728ecc20d3 00:05:56.299 [2024-07-23 06:19:07.488668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a262, Actual=88010a2d4837a266 00:05:56.299 [2024-07-23 06:19:07.488710] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.488750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.488791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:05:56.299 [2024-07-23 06:19:07.488832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:05:56.299 [2024-07-23 06:19:07.488872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f93ad86b09e6e62e 00:05:56.299 [2024-07-23 06:19:07.488903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=6619964f83c55a06 00:05:56.299 passed 00:05:56.299 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-23 06:19:07.488937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:05:56.299 [2024-07-23 06:19:07.488978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:05:56.299 [2024-07-23 06:19:07.489018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.489059] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.489100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.299 [2024-07-23 06:19:07.489140] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.299 [2024-07-23 06:19:07.489181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=8823 00:05:56.299 [2024-07-23 06:19:07.489211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=dc0b 00:05:56.299 [2024-07-23 06:19:07.489242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e9, Actual=1ab753ed 00:05:56.299 [2024-07-23 06:19:07.489282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=38574664, Actual=38574660 00:05:56.299 [2024-07-23 06:19:07.489323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.489363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.489404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.299 [2024-07-23 06:19:07.489444] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.299 [2024-07-23 06:19:07.489484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2d8e01e 00:05:56.299 [2024-07-23 06:19:07.489515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=389ded4d 00:05:56.299 [2024-07-23 06:19:07.489545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d7, Actual=a576a7728ecc20d3 00:05:56.299 [2024-07-23 06:19:07.489586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a262, Actual=88010a2d4837a266 00:05:56.299 [2024-07-23 06:19:07.489626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.489667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.489709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:05:56.299 [2024-07-23 06:19:07.489749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:05:56.299 [2024-07-23 06:19:07.489790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f93ad86b09e6e62e 00:05:56.299 passed 00:05:56.299 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-23 06:19:07.489820] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=6619964f83c55a06 00:05:56.299 [2024-07-23 06:19:07.489853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:05:56.299 [2024-07-23 06:19:07.489894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:05:56.299 [2024-07-23 06:19:07.489935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.489975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.490016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 
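(Editorial aside, not part of the captured log: every "Failed to compare Guard / App Tag / Ref Tag" line in this dif suite is an expected failure injected by the test. Conceptually, a verifier recomputes the 8-byte T10 DIF field for each block and compares it with the stored one. The sketch below is self-contained and illustrative only; the bitwise CRC uses the well-known T10-DIF polynomial 0x8BB7, but the struct layout and helper names are assumptions, not SPDK's API.)

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* 8-byte T10 protection information field (layout assumed for the demo). */
struct t10_dif {
    uint16_t guard;     /* CRC over the data block */
    uint16_t app_tag;   /* opaque application tag */
    uint32_t ref_tag;   /* usually the (truncated) LBA */
};

/* Textbook bitwise CRC-16 with the T10-DIF polynomial 0x8BB7, init 0. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int b = 0; b < 8; b++) {
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
        }
    }
    return crc;
}

/* Return 0 when the stored PI matches what we recompute, -1 otherwise. */
static int dif_verify_block(const uint8_t *data, size_t len, uint64_t lba,
                            const struct t10_dif *stored, uint16_t expected_app)
{
    if (stored->guard != crc16_t10dif(data, len)) {
        return -1;  /* corresponds to "Failed to compare Guard" */
    }
    if (stored->app_tag != expected_app) {
        return -1;  /* corresponds to "Failed to compare App Tag" */
    }
    if (stored->ref_tag != (uint32_t)lba) {
        return -1;  /* corresponds to "Failed to compare Ref Tag" */
    }
    return 0;
}

int main(void)
{
    uint8_t block[512] = { 0xab };
    struct t10_dif pi = {
        .guard = crc16_t10dif(block, sizeof(block)),
        .app_tag = 0x88,
        .ref_tag = 94,
    };

    printf("intact:    %d\n", dif_verify_block(block, sizeof(block), 94, &pi, 0x88));
    block[0] ^= 0x01;   /* corrupt one byte -> guard mismatch, like the log lines */
    printf("corrupted: %d\n", dif_verify_block(block, sizeof(block), 94, &pi, 0x88));
    return 0;
}
```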
00:05:56.299 [2024-07-23 06:19:07.490056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.299 [2024-07-23 06:19:07.490097] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=8823 00:05:56.299 passed 00:05:56.299 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-23 06:19:07.490128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=dc0b 00:05:56.299 [2024-07-23 06:19:07.490161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e9, Actual=1ab753ed 00:05:56.299 [2024-07-23 06:19:07.490212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574664, Actual=38574660 00:05:56.299 [2024-07-23 06:19:07.490252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.490293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.490333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.299 [2024-07-23 06:19:07.490374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.299 [2024-07-23 06:19:07.490414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2d8e01e 00:05:56.299 [2024-07-23 06:19:07.490444] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=389ded4d 00:05:56.299 [2024-07-23 06:19:07.490475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d7, Actual=a576a7728ecc20d3 00:05:56.299 [2024-07-23 06:19:07.490517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a262, Actual=88010a2d4837a266 00:05:56.299 [2024-07-23 06:19:07.490557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.490598] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.490638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:05:56.299 [2024-07-23 06:19:07.490680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:05:56.299 passed 00:05:56.299 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-23 06:19:07.490721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f93ad86b09e6e62e 00:05:56.299 [2024-07-23 06:19:07.490751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=6619964f83c55a06 00:05:56.299 [2024-07-23 06:19:07.490784] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:05:56.299 [2024-07-23 06:19:07.490825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:05:56.299 [2024-07-23 06:19:07.490865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.299 [2024-07-23 06:19:07.490905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.300 [2024-07-23 06:19:07.490946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.300 [2024-07-23 06:19:07.490986] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.300 [2024-07-23 06:19:07.491027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=8823 00:05:56.300 passed 00:05:56.300 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-23 06:19:07.491058] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=dc0b 00:05:56.300 [2024-07-23 06:19:07.491091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e9, Actual=1ab753ed 00:05:56.300 [2024-07-23 06:19:07.491131] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574664, Actual=38574660 00:05:56.300 [2024-07-23 06:19:07.491171] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.300 [2024-07-23 06:19:07.491212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.300 [2024-07-23 06:19:07.491252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.300 [2024-07-23 06:19:07.491293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.300 [2024-07-23 06:19:07.491333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2d8e01e 00:05:56.300 [2024-07-23 06:19:07.491364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=389ded4d 00:05:56.300 [2024-07-23 06:19:07.491394] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d7, Actual=a576a7728ecc20d3 00:05:56.300 [2024-07-23 06:19:07.491435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a262, Actual=88010a2d4837a266 00:05:56.300 [2024-07-23 06:19:07.491475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.300 
[2024-07-23 06:19:07.491516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.300 [2024-07-23 06:19:07.491556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:05:56.300 [2024-07-23 06:19:07.491596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:05:56.300 passed 00:05:56.300 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...[2024-07-23 06:19:07.491637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f93ad86b09e6e62e 00:05:56.300 [2024-07-23 06:19:07.491667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=6619964f83c55a06 00:05:56.300 passed 00:05:56.300 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:05:56.300 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:05:56.300 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:56.300 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:05:56.300 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:05:56.300 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:05:56.300 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:05:56.300 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:56.300 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-23 06:19:07.497267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd48, Actual=fd4c 00:05:56.300 [2024-07-23 06:19:07.497455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=d379, Actual=d37d 00:05:56.300 [2024-07-23 06:19:07.497632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.300 [2024-07-23 06:19:07.497809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.300 [2024-07-23 06:19:07.497985] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5a 00:05:56.300 [2024-07-23 06:19:07.498165] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5a 00:05:56.300 [2024-07-23 06:19:07.498339] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=8823 00:05:56.300 [2024-07-23 06:19:07.498513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ba8c, Actual=98a6 00:05:56.300 [2024-07-23 06:19:07.498689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753e9, Actual=1ab753ed 00:05:56.300 [2024-07-23 06:19:07.498864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=3c3173f6, Actual=3c3173f2 00:05:56.300 [2024-07-23 06:19:07.499040] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.300 [2024-07-23 06:19:07.499215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.300 [2024-07-23 06:19:07.499391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5a 00:05:56.300 [2024-07-23 06:19:07.499569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5a 00:05:56.300 [2024-07-23 06:19:07.499745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=2d8e01e 00:05:56.300 [2024-07-23 06:19:07.499920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=c91054db, Actual=c9dafff6 00:05:56.300 [2024-07-23 06:19:07.500095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d7, Actual=a576a7728ecc20d3 00:05:56.300 [2024-07-23 06:19:07.500280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=eb5f6f91001bc048, Actual=eb5f6f91001bc04c 00:05:56.300 [2024-07-23 06:19:07.500456] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.300 [2024-07-23 06:19:07.500632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.300 [2024-07-23 06:19:07.500816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4005e 00:05:56.300 [2024-07-23 06:19:07.500993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4005e 00:05:56.300 [2024-07-23 06:19:07.501169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=f93ad86b09e6e62e 00:05:56.300 [2024-07-23 06:19:07.501345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=2d19b1684f09bf67, Actual=c3012d0a84fb4707 00:05:56.300 passed 00:05:56.300 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-23 06:19:07.501398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:05:56.300 [2024-07-23 06:19:07.501442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=44e3, Actual=44e7 00:05:56.300 [2024-07-23 06:19:07.501484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.300 [2024-07-23 06:19:07.501527] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.300 [2024-07-23 06:19:07.501569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.300 [2024-07-23 06:19:07.501612] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.300 [2024-07-23 06:19:07.501654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=8823 00:05:56.300 [2024-07-23 06:19:07.501696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=f3c 00:05:56.300 [2024-07-23 06:19:07.501739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e9, Actual=1ab753ed 00:05:56.300 [2024-07-23 06:19:07.501782] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1f876781, Actual=1f876785 00:05:56.300 [2024-07-23 06:19:07.501824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.300 [2024-07-23 06:19:07.501866] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.300 [2024-07-23 06:19:07.501908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.300 [2024-07-23 06:19:07.501950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.300 [2024-07-23 06:19:07.501993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2d8e01e 00:05:56.300 [2024-07-23 06:19:07.502035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=ea6ceb81 00:05:56.300 [2024-07-23 06:19:07.502086] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d7, Actual=a576a7728ecc20d3 00:05:56.301 [2024-07-23 06:19:07.502129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20f4e219c84f8d, Actual=fe20f4e219c84f89 00:05:56.301 [2024-07-23 06:19:07.502172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.301 [2024-07-23 06:19:07.502214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.301 [2024-07-23 06:19:07.502257] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:05:56.301 [2024-07-23 06:19:07.502306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:05:56.301 [2024-07-23 06:19:07.502349] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f93ad86b09e6e62e 00:05:56.301 passed 00:05:56.301 Test: dix_sec_0_md_8_error ...passed 00:05:56.301 Test: dix_sec_512_md_0_error ...passed 00:05:56.301 Test: dix_sec_512_md_16_error ...passed 00:05:56.301 Test: dix_sec_4096_md_0_8_error ...[2024-07-23 06:19:07.502392] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=d67eb6799d28c8c2 
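(Editorial aside, not part of the captured log: the dix_sec_*_error cases reported just below feed deliberately bad geometry into the DIF context setup and expect it to be rejected. The standalone check below only mirrors the three error strings seen in the log; the 8-byte DIF field size, the struct fields and the exact conditions are assumptions for illustration and do not reproduce SPDK's spdk_dif_ctx_init.)

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DIF_SIZE_BYTES 8u   /* protection information bytes per block (assumed) */

struct dif_geometry {
    uint32_t block_size;    /* data + metadata, bytes */
    uint32_t md_size;       /* metadata bytes per block */
    bool     md_interleave; /* metadata carried inline with the data */
};

/* Return 0 if the geometry could host a DIF field, -1 otherwise. */
static int dif_geometry_check(const struct dif_geometry *g)
{
    uint32_t data_size = g->md_interleave ? g->block_size - g->md_size
                                          : g->block_size;

    if (g->block_size == 0 || data_size == 0) {
        return -1;  /* cf. "Zero data block size is not allowed" */
    }
    if (g->md_size < DIF_SIZE_BYTES) {
        return -1;  /* cf. "Metadata size is smaller than DIF size" */
    }
    if (!g->md_interleave && (data_size % 4096) != 0) {
        return -1;  /* cf. "Data block size should be a multiple of 4kB" (condition assumed) */
    }
    return 0;
}

int main(void)
{
    struct dif_geometry ok   = { .block_size = 4104, .md_size = 8, .md_interleave = true };
    struct dif_geometry zero = { .block_size = 0,    .md_size = 8, .md_interleave = true };
    struct dif_geometry thin = { .block_size = 520,  .md_size = 4, .md_interleave = true };

    printf("ok:   %d\n", dif_geometry_check(&ok));   /* 0  */
    printf("zero: %d\n", dif_geometry_check(&zero)); /* -1 */
    printf("thin: %d\n", dif_geometry_check(&thin)); /* -1 */
    return 0;
}
```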
00:05:56.301 [2024-07-23 06:19:07.502402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 555:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:05:56.301 [2024-07-23 06:19:07.502413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:56.301 [2024-07-23 06:19:07.502422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:05:56.301 [2024-07-23 06:19:07.502430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:05:56.301 [2024-07-23 06:19:07.502440] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:56.301 [2024-07-23 06:19:07.502447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:56.301 [2024-07-23 06:19:07.502455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:56.301 [2024-07-23 06:19:07.502462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:56.301 passed 00:05:56.301 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:05:56.301 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:05:56.301 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:05:56.301 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:56.301 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:05:56.301 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:05:56.301 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:05:56.301 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:05:56.301 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:56.301 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-23 06:19:07.507875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd48, Actual=fd4c 00:05:56.301 [2024-07-23 06:19:07.508061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=d379, Actual=d37d 00:05:56.301 [2024-07-23 06:19:07.508236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.301 [2024-07-23 06:19:07.508410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.301 [2024-07-23 06:19:07.508582] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5a 00:05:56.301 [2024-07-23 06:19:07.508762] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5a 00:05:56.301 [2024-07-23 06:19:07.508935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=8823 00:05:56.301 [2024-07-23 06:19:07.509107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ba8c, Actual=98a6 00:05:56.301 [2024-07-23 06:19:07.509278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=94, Expected=1ab753e9, Actual=1ab753ed 00:05:56.301 [2024-07-23 06:19:07.509449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=3c3173f6, Actual=3c3173f2 00:05:56.301 [2024-07-23 06:19:07.509619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.301 [2024-07-23 06:19:07.509790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.301 [2024-07-23 06:19:07.509963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5a 00:05:56.301 [2024-07-23 06:19:07.510134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5a 00:05:56.301 [2024-07-23 06:19:07.510311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=2d8e01e 00:05:56.301 [2024-07-23 06:19:07.510482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=c91054db, Actual=c9dafff6 00:05:56.301 [2024-07-23 06:19:07.510654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d7, Actual=a576a7728ecc20d3 00:05:56.301 [2024-07-23 06:19:07.510825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=eb5f6f91001bc048, Actual=eb5f6f91001bc04c 00:05:56.301 [2024-07-23 06:19:07.510996] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.301 [2024-07-23 06:19:07.511168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=8c 00:05:56.301 [2024-07-23 06:19:07.511340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4005e 00:05:56.301 [2024-07-23 06:19:07.511521] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4005e 00:05:56.301 [2024-07-23 06:19:07.511692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=f93ad86b09e6e62e 00:05:56.301 [2024-07-23 06:19:07.511864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=2d19b1684f09bf67, Actual=c3012d0a84fb4707 00:05:56.301 passed 00:05:56.301 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-23 06:19:07.511915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:05:56.301 [2024-07-23 06:19:07.511958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=44e3, Actual=44e7 00:05:56.301 [2024-07-23 06:19:07.512000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.301 [2024-07-23 06:19:07.512049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, 
Actual=8c 00:05:56.301 [2024-07-23 06:19:07.512092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.301 [2024-07-23 06:19:07.512135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.301 [2024-07-23 06:19:07.512178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=8823 00:05:56.301 [2024-07-23 06:19:07.512220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=f3c 00:05:56.301 [2024-07-23 06:19:07.512262] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e9, Actual=1ab753ed 00:05:56.301 [2024-07-23 06:19:07.512304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1f876781, Actual=1f876785 00:05:56.301 [2024-07-23 06:19:07.512346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.301 [2024-07-23 06:19:07.512387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.301 [2024-07-23 06:19:07.512429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.301 [2024-07-23 06:19:07.512471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:05:56.301 [2024-07-23 06:19:07.512513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2d8e01e 00:05:56.301 [2024-07-23 06:19:07.512554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=ea6ceb81 00:05:56.301 [2024-07-23 06:19:07.512597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d7, Actual=a576a7728ecc20d3 00:05:56.301 [2024-07-23 06:19:07.512640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20f4e219c84f8d, Actual=fe20f4e219c84f89 00:05:56.301 [2024-07-23 06:19:07.512698] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.301 [2024-07-23 06:19:07.512742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:05:56.301 [2024-07-23 06:19:07.512784] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:05:56.301 [2024-07-23 06:19:07.512827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:05:56.302 passed 00:05:56.302 Test: set_md_interleave_iovs_test ...[2024-07-23 06:19:07.512869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f93ad86b09e6e62e 00:05:56.302 [2024-07-23 06:19:07.512911] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=d67eb6799d28c8c2 00:05:56.302 passed 00:05:56.302 Test: set_md_interleave_iovs_split_test ...passed 00:05:56.302 Test: dif_generate_stream_pi_16_test ...passed 00:05:56.302 Test: dif_generate_stream_test ...passed 00:05:56.302 Test: set_md_interleave_iovs_alignment_test ...passed 00:05:56.302 Test: dif_generate_split_test ...[2024-07-23 06:19:07.513807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1857:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 00:05:56.302 passed 00:05:56.302 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:05:56.302 Test: dif_verify_split_test ...passed 00:05:56.302 Test: dif_verify_stream_multi_segments_test ...passed 00:05:56.302 Test: update_crc32c_pi_16_test ...passed 00:05:56.302 Test: update_crc32c_test ...passed 00:05:56.302 Test: dif_update_crc32c_split_test ...passed 00:05:56.302 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:05:56.302 Test: get_range_with_md_test ...passed 00:05:56.302 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:05:56.302 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:05:56.302 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:05:56.302 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:05:56.302 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:05:56.302 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:05:56.302 Test: dif_generate_and_verify_unmap_test ...passed 00:05:56.302 Test: dif_pi_format_check_test ...passed 00:05:56.302 Test: dif_type_check_test ...passed 00:05:56.302 00:05:56.302 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.302 suites 1 1 n/a 0 0 00:05:56.302 tests 86 86 86 0 0 00:05:56.302 asserts 3605 3605 3605 0 n/a 00:05:56.302 00:05:56.302 Elapsed time = 0.039 seconds 00:05:56.302 06:19:07 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:05:56.302 00:05:56.302 00:05:56.302 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.302 http://cunit.sourceforge.net/ 00:05:56.302 00:05:56.302 00:05:56.302 Suite: iov 00:05:56.302 Test: test_single_iov ...passed 00:05:56.302 Test: test_simple_iov ...passed 00:05:56.302 Test: test_complex_iov ...passed 00:05:56.302 Test: test_iovs_to_buf ...passed 00:05:56.302 Test: test_buf_to_iovs ...passed 00:05:56.302 Test: test_memset ...passed 00:05:56.302 Test: test_iov_one ...passed 00:05:56.302 Test: test_iov_xfer ...passed 00:05:56.302 00:05:56.302 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.302 suites 1 1 n/a 0 0 00:05:56.302 tests 8 8 8 0 0 00:05:56.302 asserts 156 156 156 0 n/a 00:05:56.302 00:05:56.302 Elapsed time = 0.000 seconds 00:05:56.302 06:19:07 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:05:56.302 00:05:56.302 00:05:56.302 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.302 http://cunit.sourceforge.net/ 00:05:56.302 00:05:56.302 00:05:56.302 Suite: math 00:05:56.302 Test: test_serial_number_arithmetic ...passed 00:05:56.302 Suite: erase 00:05:56.302 Test: test_memset_s ...passed 00:05:56.302 00:05:56.302 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.302 suites 2 2 n/a 0 0 00:05:56.302 tests 2 2 2 0 0 00:05:56.302 asserts 
18 18 18 0 n/a 00:05:56.302 00:05:56.302 Elapsed time = 0.000 seconds 00:05:56.302 06:19:07 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:05:56.302 00:05:56.302 00:05:56.302 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.302 http://cunit.sourceforge.net/ 00:05:56.302 00:05:56.302 00:05:56.302 Suite: pipe 00:05:56.302 Test: test_create_destroy ...passed 00:05:56.302 Test: test_write_get_buffer ...passed 00:05:56.302 Test: test_write_advance ...passed 00:05:56.302 Test: test_read_get_buffer ...passed 00:05:56.302 Test: test_read_advance ...passed 00:05:56.302 Test: test_data ...passed 00:05:56.302 00:05:56.302 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.302 suites 1 1 n/a 0 0 00:05:56.302 tests 6 6 6 0 0 00:05:56.302 asserts 251 251 251 0 n/a 00:05:56.302 00:05:56.302 Elapsed time = 0.000 seconds 00:05:56.302 06:19:07 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:05:56.302 00:05:56.302 00:05:56.302 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.302 http://cunit.sourceforge.net/ 00:05:56.302 00:05:56.302 00:05:56.302 Suite: xor 00:05:56.302 Test: test_xor_gen ...passed 00:05:56.302 00:05:56.302 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.302 suites 1 1 n/a 0 0 00:05:56.302 tests 1 1 1 0 0 00:05:56.302 asserts 17 17 17 0 n/a 00:05:56.302 00:05:56.302 Elapsed time = 0.000 seconds 00:05:56.302 00:05:56.302 real 0m0.118s 00:05:56.302 user 0m0.067s 00:05:56.302 sys 0m0.055s 00:05:56.302 06:19:07 unittest.unittest_util -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.302 06:19:07 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:05:56.302 ************************************ 00:05:56.302 END TEST unittest_util 00:05:56.302 ************************************ 00:05:56.302 06:19:07 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.302 06:19:07 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:56.302 06:19:07 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:05:56.302 06:19:07 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.302 06:19:07 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.302 06:19:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.302 ************************************ 00:05:56.302 START TEST unittest_dma 00:05:56.302 ************************************ 00:05:56.302 06:19:07 unittest.unittest_dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:05:56.302 00:05:56.302 00:05:56.302 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.302 http://cunit.sourceforge.net/ 00:05:56.302 00:05:56.302 00:05:56.302 Suite: dma_suite 00:05:56.302 Test: test_dma ...passed 00:05:56.302 00:05:56.302 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.302 suites 1 1 n/a 0 0 00:05:56.302 tests 1 1 1 0 0 00:05:56.302 asserts 54 54 54 0 n/a 00:05:56.302 00:05:56.302 Elapsed time = 0.000 seconds 00:05:56.303 [2024-07-23 06:19:07.586567] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:05:56.303 00:05:56.303 real 0m0.006s 00:05:56.303 user 0m0.005s 00:05:56.303 sys 0m0.005s 00:05:56.303 06:19:07 
unittest.unittest_dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.303 06:19:07 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:05:56.303 ************************************ 00:05:56.303 END TEST unittest_dma 00:05:56.303 ************************************ 00:05:56.303 06:19:07 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.303 06:19:07 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:05:56.303 06:19:07 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.303 06:19:07 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.303 06:19:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.303 ************************************ 00:05:56.303 START TEST unittest_init 00:05:56.303 ************************************ 00:05:56.303 06:19:07 unittest.unittest_init -- common/autotest_common.sh@1123 -- # unittest_init 00:05:56.303 06:19:07 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:05:56.303 00:05:56.303 00:05:56.303 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.303 http://cunit.sourceforge.net/ 00:05:56.303 00:05:56.303 00:05:56.303 Suite: subsystem_suite 00:05:56.303 Test: subsystem_sort_test_depends_on_single ...passed 00:05:56.303 Test: subsystem_sort_test_depends_on_multiple ...passed 00:05:56.303 Test: subsystem_sort_test_missing_dependency ...[2024-07-23 06:19:07.627408] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 197:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:05:56.303 [2024-07-23 06:19:07.627657] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:05:56.303 passed 00:05:56.303 00:05:56.303 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.303 suites 1 1 n/a 0 0 00:05:56.303 tests 3 3 3 0 0 00:05:56.303 asserts 20 20 20 0 n/a 00:05:56.303 00:05:56.303 Elapsed time = 0.000 seconds 00:05:56.303 00:05:56.303 real 0m0.007s 00:05:56.303 user 0m0.000s 00:05:56.303 sys 0m0.008s 00:05:56.303 06:19:07 unittest.unittest_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.303 ************************************ 00:05:56.303 06:19:07 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.303 END TEST unittest_init 00:05:56.303 ************************************ 00:05:56.303 06:19:07 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.303 06:19:07 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:05:56.303 06:19:07 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.303 06:19:07 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.303 06:19:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.303 ************************************ 00:05:56.303 START TEST unittest_keyring 00:05:56.303 ************************************ 00:05:56.303 06:19:07 unittest.unittest_keyring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:05:56.303 00:05:56.303 00:05:56.303 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.303 http://cunit.sourceforge.net/ 00:05:56.303 00:05:56.303 00:05:56.303 Suite: keyring 00:05:56.303 Test: test_keyring_add_remove ...[2024-07-23 06:19:07.676069] 
/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:05:56.303 passed 00:05:56.303 Test: test_keyring_get_put ...passed 00:05:56.303 00:05:56.303 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.303 suites 1 1 n/a 0 0 00:05:56.303 tests 2 2 2 0 0 00:05:56.303 asserts 44 44 44 0 n/a 00:05:56.303 00:05:56.303 Elapsed time = 0.000 seconds 00:05:56.303 [2024-07-23 06:19:07.676328] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:05:56.303 [2024-07-23 06:19:07.676353] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:05:56.303 00:05:56.303 real 0m0.006s 00:05:56.303 user 0m0.005s 00:05:56.303 sys 0m0.001s 00:05:56.303 06:19:07 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.303 06:19:07 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:05:56.303 ************************************ 00:05:56.303 END TEST unittest_keyring 00:05:56.303 ************************************ 00:05:56.303 06:19:07 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:56.303 06:19:07 unittest -- unit/unittest.sh@292 -- # '[' no = yes ']' 00:05:56.303 00:05:56.303 00:05:56.303 ===================== 00:05:56.303 All unit tests passed 00:05:56.303 ===================== 00:05:56.303 06:19:07 unittest -- unit/unittest.sh@305 -- # set +x 00:05:56.303 WARN: lcov not installed or SPDK built without coverage! 00:05:56.303 WARN: neither valgrind nor ASAN is enabled! 00:05:56.303 00:05:56.303 00:05:56.303 00:05:56.303 real 0m30.810s 00:05:56.303 user 0m13.123s 00:05:56.303 sys 0m1.333s 00:05:56.303 06:19:07 unittest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.303 06:19:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:56.303 ************************************ 00:05:56.303 END TEST unittest 00:05:56.303 ************************************ 00:05:56.303 06:19:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.303 06:19:07 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:56.303 06:19:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:56.303 06:19:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:56.303 06:19:07 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:56.303 06:19:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:56.303 06:19:07 -- common/autotest_common.sh@10 -- # set +x 00:05:56.303 06:19:07 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:56.303 06:19:07 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:56.303 06:19:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.303 06:19:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.303 06:19:07 -- common/autotest_common.sh@10 -- # set +x 00:05:56.303 ************************************ 00:05:56.303 START TEST env 00:05:56.303 ************************************ 00:05:56.303 06:19:07 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:56.303 * Looking for test storage... 
00:05:56.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:56.303 06:19:07 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:56.303 06:19:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.303 06:19:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.303 06:19:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.303 ************************************ 00:05:56.303 START TEST env_memory 00:05:56.303 ************************************ 00:05:56.303 06:19:07 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:56.303 00:05:56.303 00:05:56.303 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.303 http://cunit.sourceforge.net/ 00:05:56.303 00:05:56.303 00:05:56.303 Suite: memory 00:05:56.303 Test: alloc and free memory map ...[2024-07-23 06:19:07.922805] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:56.303 passed 00:05:56.303 Test: mem map translation ...[2024-07-23 06:19:07.930584] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:56.303 [2024-07-23 06:19:07.930627] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:56.303 [2024-07-23 06:19:07.930658] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:56.303 [2024-07-23 06:19:07.930674] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:56.303 passed 00:05:56.303 Test: mem map registration ...[2024-07-23 06:19:07.940474] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:56.303 [2024-07-23 06:19:07.940515] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:56.303 passed 00:05:56.303 Test: mem map adjacent registrations ...passed 00:05:56.303 00:05:56.303 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.303 suites 1 1 n/a 0 0 00:05:56.303 tests 4 4 4 0 0 00:05:56.303 asserts 152 152 152 0 n/a 00:05:56.303 00:05:56.303 Elapsed time = 0.031 seconds 00:05:56.303 00:05:56.303 real 0m0.041s 00:05:56.303 user 0m0.043s 00:05:56.304 sys 0m0.007s 00:05:56.304 06:19:07 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.304 06:19:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:56.304 ************************************ 00:05:56.304 END TEST env_memory 00:05:56.304 ************************************ 00:05:56.304 06:19:07 env -- common/autotest_common.sh@1142 -- # return 0 00:05:56.304 06:19:07 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:56.304 06:19:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.304 06:19:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.304 06:19:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.304 ************************************ 00:05:56.304 START TEST env_vtophys 
00:05:56.304 ************************************ 00:05:56.304 06:19:07 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:56.304 EAL: lib.eal log level changed from notice to debug 00:05:56.304 EAL: Sysctl reports 10 cpus 00:05:56.304 EAL: Detected lcore 0 as core 0 on socket 0 00:05:56.304 EAL: Detected lcore 1 as core 0 on socket 0 00:05:56.304 EAL: Detected lcore 2 as core 0 on socket 0 00:05:56.304 EAL: Detected lcore 3 as core 0 on socket 0 00:05:56.304 EAL: Detected lcore 4 as core 0 on socket 0 00:05:56.304 EAL: Detected lcore 5 as core 0 on socket 0 00:05:56.304 EAL: Detected lcore 6 as core 0 on socket 0 00:05:56.304 EAL: Detected lcore 7 as core 0 on socket 0 00:05:56.304 EAL: Detected lcore 8 as core 0 on socket 0 00:05:56.304 EAL: Detected lcore 9 as core 0 on socket 0 00:05:56.304 EAL: Maximum logical cores by configuration: 128 00:05:56.304 EAL: Detected CPU lcores: 10 00:05:56.304 EAL: Detected NUMA nodes: 1 00:05:56.304 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:56.304 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:56.304 EAL: Checking presence of .so 'librte_eal.so' 00:05:56.304 EAL: Detected static linkage of DPDK 00:05:56.304 EAL: No shared files mode enabled, IPC will be disabled 00:05:56.304 EAL: PCI scan found 10 devices 00:05:56.304 EAL: Specific IOVA mode is not requested, autodetecting 00:05:56.304 EAL: Selecting IOVA mode according to bus requests 00:05:56.304 EAL: Bus pci wants IOVA as 'PA' 00:05:56.304 EAL: Selected IOVA mode 'PA' 00:05:56.304 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:05:56.304 EAL: Ask a virtual area of 0x2e000 bytes 00:05:56.304 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x1000304000) not respected! 00:05:56.304 EAL: This may cause issues with mapping memory into secondary processes 00:05:56.304 EAL: Virtual area found at 0x1000304000 (size = 0x2e000) 00:05:56.304 EAL: Setting up physically contiguous memory... 00:05:56.304 EAL: Ask a virtual area of 0x1000 bytes 00:05:56.304 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x1000e76000) not respected! 00:05:56.304 EAL: This may cause issues with mapping memory into secondary processes 00:05:56.304 EAL: Virtual area found at 0x1000e76000 (size = 0x1000) 00:05:56.304 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:05:56.304 EAL: Ask a virtual area of 0xf0000000 bytes 00:05:56.304 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:05:56.304 EAL: This may cause issues with mapping memory into secondary processes 00:05:56.304 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:05:56.304 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:05:56.304 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x40000000, len 268435456 00:05:56.304 EAL: Mapped memory segment 1 @ 0x1080000000: physaddr:0x60000000, len 268435456 00:05:56.304 EAL: Mapped memory segment 2 @ 0x1070000000: physaddr:0x70000000, len 268435456 00:05:56.304 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x110000000, len 268435456 00:05:56.304 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x120000000, len 268435456 00:05:56.304 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x130000000, len 268435456 00:05:56.304 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x140000000, len 268435456 00:05:56.304 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x150000000, len 268435456 00:05:56.304 EAL: No shared files mode enabled, IPC is disabled 00:05:56.304 EAL: Added 2048M to heap on socket 0 00:05:56.304 EAL: TSC is not safe to use in SMP mode 00:05:56.304 EAL: TSC is not invariant 00:05:56.304 EAL: TSC frequency is ~2199994 KHz 00:05:56.304 EAL: Main lcore 0 is ready (tid=1d0cc5812000;cpuset=[0]) 00:05:56.304 EAL: PCI scan found 10 devices 00:05:56.304 EAL: Registering mem event callbacks not supported 00:05:56.304 00:05:56.304 00:05:56.304 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.304 http://cunit.sourceforge.net/ 00:05:56.304 00:05:56.304 00:05:56.304 Suite: components_suite 00:05:56.304 Test: vtophys_malloc_test ...passed 00:05:56.563 Test: vtophys_spdk_malloc_test ...passed 00:05:56.563 00:05:56.563 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.563 suites 1 1 n/a 0 0 00:05:56.563 tests 2 2 2 0 0 00:05:56.563 asserts 497 497 497 0 n/a 00:05:56.563 00:05:56.563 Elapsed time = 0.383 seconds 00:05:56.563 00:05:56.563 real 0m0.960s 00:05:56.563 user 0m0.386s 00:05:56.563 sys 0m0.572s 00:05:56.563 06:19:08 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.563 06:19:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:56.563 ************************************ 00:05:56.563 END TEST env_vtophys 00:05:56.563 ************************************ 00:05:56.563 06:19:08 env -- common/autotest_common.sh@1142 -- # return 0 00:05:56.563 06:19:08 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:56.563 06:19:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.563 06:19:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.563 06:19:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.563 ************************************ 00:05:56.563 START TEST env_pci 00:05:56.563 ************************************ 00:05:56.563 06:19:08 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:56.563 00:05:56.563 00:05:56.563 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.563 http://cunit.sourceforge.net/ 00:05:56.563 00:05:56.563 00:05:56.563 Suite: pci 00:05:56.563 Test: pci_hook ...passed 00:05:56.563 00:05:56.563 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.563 suites 1 1 n/a 0 0 00:05:56.563 tests 1 1 1 0 0 00:05:56.563 asserts 25 25 25 0 n/a 00:05:56.563 00:05:56.563 Elapsed time = 0.000 seconds 00:05:56.563 EAL: Cannot find device (10000:00:01.0) 00:05:56.563 EAL: 
Failed to attach device on primary process 00:05:56.563 00:05:56.563 real 0m0.008s 00:05:56.563 user 0m0.008s 00:05:56.563 sys 0m0.000s 00:05:56.563 06:19:08 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.563 06:19:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:56.563 ************************************ 00:05:56.563 END TEST env_pci 00:05:56.563 ************************************ 00:05:56.563 06:19:09 env -- common/autotest_common.sh@1142 -- # return 0 00:05:56.563 06:19:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:56.563 06:19:09 env -- env/env.sh@15 -- # uname 00:05:56.563 06:19:09 env -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:05:56.563 06:19:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:05:56.563 06:19:09 env -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:56.563 06:19:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.563 06:19:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.563 ************************************ 00:05:56.563 START TEST env_dpdk_post_init 00:05:56.564 ************************************ 00:05:56.564 06:19:09 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:05:56.564 EAL: Sysctl reports 10 cpus 00:05:56.564 EAL: Detected CPU lcores: 10 00:05:56.564 EAL: Detected NUMA nodes: 1 00:05:56.564 EAL: Detected static linkage of DPDK 00:05:56.564 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:56.564 EAL: Selected IOVA mode 'PA' 00:05:56.564 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:05:56.823 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x40000000, len 268435456 00:05:56.823 EAL: Mapped memory segment 1 @ 0x1080000000: physaddr:0x60000000, len 268435456 00:05:56.823 EAL: Mapped memory segment 2 @ 0x1070000000: physaddr:0x70000000, len 268435456 00:05:56.823 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x110000000, len 268435456 00:05:57.082 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x120000000, len 268435456 00:05:57.082 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x130000000, len 268435456 00:05:57.082 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x140000000, len 268435456 00:05:57.082 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x150000000, len 268435456 00:05:57.082 EAL: TSC is not safe to use in SMP mode 00:05:57.082 EAL: TSC is not invariant 00:05:57.082 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:57.082 [2024-07-23 06:19:09.590444] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:05:57.082 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:57.342 Starting DPDK initialization... 00:05:57.342 Starting SPDK post initialization... 00:05:57.342 SPDK NVMe probe 00:05:57.342 Attaching to 0000:00:10.0 00:05:57.342 Attached to 0000:00:10.0 00:05:57.342 Cleaning up... 
00:05:57.342 00:05:57.342 real 0m0.585s 00:05:57.342 user 0m0.000s 00:05:57.342 sys 0m0.581s 00:05:57.342 06:19:09 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.342 06:19:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:57.342 ************************************ 00:05:57.342 END TEST env_dpdk_post_init 00:05:57.342 ************************************ 00:05:57.342 06:19:09 env -- common/autotest_common.sh@1142 -- # return 0 00:05:57.342 06:19:09 env -- env/env.sh@26 -- # uname 00:05:57.342 06:19:09 env -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:05:57.342 00:05:57.342 real 0m1.906s 00:05:57.342 user 0m0.609s 00:05:57.342 sys 0m1.305s 00:05:57.342 06:19:09 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.342 06:19:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.342 ************************************ 00:05:57.342 END TEST env 00:05:57.342 ************************************ 00:05:57.342 06:19:09 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.342 06:19:09 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:57.342 06:19:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.342 06:19:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.342 06:19:09 -- common/autotest_common.sh@10 -- # set +x 00:05:57.342 ************************************ 00:05:57.342 START TEST rpc 00:05:57.342 ************************************ 00:05:57.342 06:19:09 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:57.342 * Looking for test storage... 00:05:57.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:57.342 06:19:09 rpc -- rpc/rpc.sh@65 -- # spdk_pid=45530 00:05:57.342 06:19:09 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.342 06:19:09 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:57.342 06:19:09 rpc -- rpc/rpc.sh@67 -- # waitforlisten 45530 00:05:57.342 06:19:09 rpc -- common/autotest_common.sh@829 -- # '[' -z 45530 ']' 00:05:57.342 06:19:09 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.342 06:19:09 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.342 06:19:09 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.342 06:19:09 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.342 06:19:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.601 [2024-07-23 06:19:09.861426] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:57.601 [2024-07-23 06:19:09.861608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:58.168 EAL: TSC is not safe to use in SMP mode 00:05:58.168 EAL: TSC is not invariant 00:05:58.168 [2024-07-23 06:19:10.393918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.168 [2024-07-23 06:19:10.508127] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:58.168 [2024-07-23 06:19:10.510744] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:05:58.168 [2024-07-23 06:19:10.510784] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 45530' to capture a snapshot of events at runtime. 00:05:58.168 [2024-07-23 06:19:10.510811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.426 06:19:10 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.426 06:19:10 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:58.426 06:19:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:58.426 06:19:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:58.426 06:19:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:58.426 06:19:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:58.427 06:19:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.427 06:19:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.427 06:19:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.427 ************************************ 00:05:58.427 START TEST rpc_integrity 00:05:58.427 ************************************ 00:05:58.427 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:58.427 06:19:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:58.427 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.427 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.427 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.427 06:19:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:58.427 06:19:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:58.427 06:19:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:58.427 06:19:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:58.427 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.427 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.427 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.427 06:19:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:58.427 06:19:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:58.427 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.427 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.686 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.686 06:19:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:58.686 { 00:05:58.686 "name": "Malloc0", 00:05:58.686 "aliases": [ 00:05:58.686 "79ae89d2-48bb-11ef-a06c-59ddad71024c" 00:05:58.686 ], 00:05:58.686 "product_name": "Malloc disk", 00:05:58.686 "block_size": 512, 00:05:58.686 "num_blocks": 16384, 00:05:58.686 "uuid": "79ae89d2-48bb-11ef-a06c-59ddad71024c", 00:05:58.686 "assigned_rate_limits": { 00:05:58.686 "rw_ios_per_sec": 0, 00:05:58.686 "rw_mbytes_per_sec": 0, 00:05:58.686 "r_mbytes_per_sec": 0, 00:05:58.686 "w_mbytes_per_sec": 0 00:05:58.686 }, 00:05:58.686 "claimed": false, 00:05:58.686 
"zoned": false, 00:05:58.686 "supported_io_types": { 00:05:58.686 "read": true, 00:05:58.686 "write": true, 00:05:58.686 "unmap": true, 00:05:58.686 "flush": true, 00:05:58.686 "reset": true, 00:05:58.686 "nvme_admin": false, 00:05:58.686 "nvme_io": false, 00:05:58.686 "nvme_io_md": false, 00:05:58.686 "write_zeroes": true, 00:05:58.686 "zcopy": true, 00:05:58.686 "get_zone_info": false, 00:05:58.686 "zone_management": false, 00:05:58.686 "zone_append": false, 00:05:58.686 "compare": false, 00:05:58.686 "compare_and_write": false, 00:05:58.686 "abort": true, 00:05:58.686 "seek_hole": false, 00:05:58.686 "seek_data": false, 00:05:58.686 "copy": true, 00:05:58.686 "nvme_iov_md": false 00:05:58.686 }, 00:05:58.686 "memory_domains": [ 00:05:58.686 { 00:05:58.686 "dma_device_id": "system", 00:05:58.686 "dma_device_type": 1 00:05:58.686 }, 00:05:58.686 { 00:05:58.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.686 "dma_device_type": 2 00:05:58.686 } 00:05:58.686 ], 00:05:58.686 "driver_specific": {} 00:05:58.686 } 00:05:58.686 ]' 00:05:58.686 06:19:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:58.686 06:19:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:58.686 06:19:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:58.686 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.686 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.686 [2024-07-23 06:19:10.966045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:58.686 [2024-07-23 06:19:10.966092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.686 [2024-07-23 06:19:10.966665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28f112837a00 00:05:58.686 [2024-07-23 06:19:10.966689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.686 [2024-07-23 06:19:10.967608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.686 [2024-07-23 06:19:10.967630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:58.686 Passthru0 00:05:58.686 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.686 06:19:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:58.686 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.686 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.686 06:19:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.686 06:19:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:58.686 { 00:05:58.686 "name": "Malloc0", 00:05:58.686 "aliases": [ 00:05:58.686 "79ae89d2-48bb-11ef-a06c-59ddad71024c" 00:05:58.686 ], 00:05:58.686 "product_name": "Malloc disk", 00:05:58.686 "block_size": 512, 00:05:58.686 "num_blocks": 16384, 00:05:58.686 "uuid": "79ae89d2-48bb-11ef-a06c-59ddad71024c", 00:05:58.686 "assigned_rate_limits": { 00:05:58.686 "rw_ios_per_sec": 0, 00:05:58.686 "rw_mbytes_per_sec": 0, 00:05:58.686 "r_mbytes_per_sec": 0, 00:05:58.686 "w_mbytes_per_sec": 0 00:05:58.686 }, 00:05:58.686 "claimed": true, 00:05:58.686 "claim_type": "exclusive_write", 00:05:58.686 "zoned": false, 00:05:58.686 "supported_io_types": { 00:05:58.686 "read": true, 00:05:58.686 "write": true, 00:05:58.686 "unmap": true, 00:05:58.686 "flush": true, 00:05:58.686 "reset": true, 
00:05:58.686 "nvme_admin": false, 00:05:58.686 "nvme_io": false, 00:05:58.686 "nvme_io_md": false, 00:05:58.686 "write_zeroes": true, 00:05:58.686 "zcopy": true, 00:05:58.686 "get_zone_info": false, 00:05:58.686 "zone_management": false, 00:05:58.686 "zone_append": false, 00:05:58.686 "compare": false, 00:05:58.686 "compare_and_write": false, 00:05:58.686 "abort": true, 00:05:58.686 "seek_hole": false, 00:05:58.686 "seek_data": false, 00:05:58.686 "copy": true, 00:05:58.686 "nvme_iov_md": false 00:05:58.686 }, 00:05:58.686 "memory_domains": [ 00:05:58.686 { 00:05:58.686 "dma_device_id": "system", 00:05:58.686 "dma_device_type": 1 00:05:58.686 }, 00:05:58.686 { 00:05:58.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.686 "dma_device_type": 2 00:05:58.686 } 00:05:58.686 ], 00:05:58.686 "driver_specific": {} 00:05:58.686 }, 00:05:58.686 { 00:05:58.686 "name": "Passthru0", 00:05:58.686 "aliases": [ 00:05:58.686 "c98e428f-731d-a259-b53f-2e861e8e046f" 00:05:58.686 ], 00:05:58.686 "product_name": "passthru", 00:05:58.686 "block_size": 512, 00:05:58.687 "num_blocks": 16384, 00:05:58.687 "uuid": "c98e428f-731d-a259-b53f-2e861e8e046f", 00:05:58.687 "assigned_rate_limits": { 00:05:58.687 "rw_ios_per_sec": 0, 00:05:58.687 "rw_mbytes_per_sec": 0, 00:05:58.687 "r_mbytes_per_sec": 0, 00:05:58.687 "w_mbytes_per_sec": 0 00:05:58.687 }, 00:05:58.687 "claimed": false, 00:05:58.687 "zoned": false, 00:05:58.687 "supported_io_types": { 00:05:58.687 "read": true, 00:05:58.687 "write": true, 00:05:58.687 "unmap": true, 00:05:58.687 "flush": true, 00:05:58.687 "reset": true, 00:05:58.687 "nvme_admin": false, 00:05:58.687 "nvme_io": false, 00:05:58.687 "nvme_io_md": false, 00:05:58.687 "write_zeroes": true, 00:05:58.687 "zcopy": true, 00:05:58.687 "get_zone_info": false, 00:05:58.687 "zone_management": false, 00:05:58.687 "zone_append": false, 00:05:58.687 "compare": false, 00:05:58.687 "compare_and_write": false, 00:05:58.687 "abort": true, 00:05:58.687 "seek_hole": false, 00:05:58.687 "seek_data": false, 00:05:58.687 "copy": true, 00:05:58.687 "nvme_iov_md": false 00:05:58.687 }, 00:05:58.687 "memory_domains": [ 00:05:58.687 { 00:05:58.687 "dma_device_id": "system", 00:05:58.687 "dma_device_type": 1 00:05:58.687 }, 00:05:58.687 { 00:05:58.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.687 "dma_device_type": 2 00:05:58.687 } 00:05:58.687 ], 00:05:58.687 "driver_specific": { 00:05:58.687 "passthru": { 00:05:58.687 "name": "Passthru0", 00:05:58.687 "base_bdev_name": "Malloc0" 00:05:58.687 } 00:05:58.687 } 00:05:58.687 } 00:05:58.687 ]' 00:05:58.687 06:19:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:58.687 06:19:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:58.687 06:19:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:58.687 06:19:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.687 06:19:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.687 06:19:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.687 06:19:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:58.687 06:19:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.687 06:19:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.687 06:19:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.687 06:19:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:58.687 
06:19:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.687 06:19:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.687 06:19:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.687 06:19:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:58.687 06:19:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:58.687 06:19:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:58.687 00:05:58.687 real 0m0.125s 00:05:58.687 user 0m0.034s 00:05:58.687 sys 0m0.030s 00:05:58.687 ************************************ 00:05:58.687 END TEST rpc_integrity 00:05:58.687 ************************************ 00:05:58.687 06:19:11 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.687 06:19:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.687 06:19:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:58.687 06:19:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:58.687 06:19:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.687 06:19:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.687 06:19:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.687 ************************************ 00:05:58.687 START TEST rpc_plugins 00:05:58.687 ************************************ 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:58.687 06:19:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.687 06:19:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:58.687 06:19:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.687 06:19:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:58.687 { 00:05:58.687 "name": "Malloc1", 00:05:58.687 "aliases": [ 00:05:58.687 "79c5bb2b-48bb-11ef-a06c-59ddad71024c" 00:05:58.687 ], 00:05:58.687 "product_name": "Malloc disk", 00:05:58.687 "block_size": 4096, 00:05:58.687 "num_blocks": 256, 00:05:58.687 "uuid": "79c5bb2b-48bb-11ef-a06c-59ddad71024c", 00:05:58.687 "assigned_rate_limits": { 00:05:58.687 "rw_ios_per_sec": 0, 00:05:58.687 "rw_mbytes_per_sec": 0, 00:05:58.687 "r_mbytes_per_sec": 0, 00:05:58.687 "w_mbytes_per_sec": 0 00:05:58.687 }, 00:05:58.687 "claimed": false, 00:05:58.687 "zoned": false, 00:05:58.687 "supported_io_types": { 00:05:58.687 "read": true, 00:05:58.687 "write": true, 00:05:58.687 "unmap": true, 00:05:58.687 "flush": true, 00:05:58.687 "reset": true, 00:05:58.687 "nvme_admin": false, 00:05:58.687 "nvme_io": false, 00:05:58.687 "nvme_io_md": false, 00:05:58.687 "write_zeroes": true, 00:05:58.687 "zcopy": true, 00:05:58.687 "get_zone_info": false, 00:05:58.687 "zone_management": false, 00:05:58.687 "zone_append": false, 00:05:58.687 "compare": false, 00:05:58.687 "compare_and_write": false, 00:05:58.687 "abort": true, 00:05:58.687 "seek_hole": false, 00:05:58.687 "seek_data": false, 00:05:58.687 "copy": 
true, 00:05:58.687 "nvme_iov_md": false 00:05:58.687 }, 00:05:58.687 "memory_domains": [ 00:05:58.687 { 00:05:58.687 "dma_device_id": "system", 00:05:58.687 "dma_device_type": 1 00:05:58.687 }, 00:05:58.687 { 00:05:58.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.687 "dma_device_type": 2 00:05:58.687 } 00:05:58.687 ], 00:05:58.687 "driver_specific": {} 00:05:58.687 } 00:05:58.687 ]' 00:05:58.687 06:19:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:58.687 06:19:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:58.687 06:19:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.687 06:19:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.687 06:19:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:58.687 06:19:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:58.687 06:19:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:58.687 00:05:58.687 real 0m0.061s 00:05:58.687 user 0m0.002s 00:05:58.687 sys 0m0.029s 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.687 06:19:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.687 ************************************ 00:05:58.687 END TEST rpc_plugins 00:05:58.687 ************************************ 00:05:58.687 06:19:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:58.687 06:19:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:58.687 06:19:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.687 06:19:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.687 06:19:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.687 ************************************ 00:05:58.687 START TEST rpc_trace_cmd_test 00:05:58.687 ************************************ 00:05:58.687 06:19:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:58.687 06:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:58.687 06:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:58.687 06:19:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.687 06:19:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.687 06:19:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.687 06:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:58.687 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid45530", 00:05:58.687 "tpoint_group_mask": "0x8", 00:05:58.687 "iscsi_conn": { 00:05:58.687 "mask": "0x2", 00:05:58.687 "tpoint_mask": "0x0" 00:05:58.687 }, 00:05:58.687 "scsi": { 00:05:58.687 "mask": "0x4", 00:05:58.687 "tpoint_mask": "0x0" 00:05:58.688 }, 00:05:58.688 "bdev": { 00:05:58.688 "mask": "0x8", 00:05:58.688 "tpoint_mask": "0xffffffffffffffff" 00:05:58.688 }, 00:05:58.688 "nvmf_rdma": { 00:05:58.688 "mask": "0x10", 00:05:58.688 
"tpoint_mask": "0x0" 00:05:58.688 }, 00:05:58.688 "nvmf_tcp": { 00:05:58.688 "mask": "0x20", 00:05:58.688 "tpoint_mask": "0x0" 00:05:58.688 }, 00:05:58.688 "blobfs": { 00:05:58.688 "mask": "0x80", 00:05:58.688 "tpoint_mask": "0x0" 00:05:58.688 }, 00:05:58.688 "dsa": { 00:05:58.688 "mask": "0x200", 00:05:58.688 "tpoint_mask": "0x0" 00:05:58.688 }, 00:05:58.688 "thread": { 00:05:58.688 "mask": "0x400", 00:05:58.688 "tpoint_mask": "0x0" 00:05:58.688 }, 00:05:58.688 "nvme_pcie": { 00:05:58.688 "mask": "0x800", 00:05:58.688 "tpoint_mask": "0x0" 00:05:58.688 }, 00:05:58.688 "iaa": { 00:05:58.688 "mask": "0x1000", 00:05:58.688 "tpoint_mask": "0x0" 00:05:58.688 }, 00:05:58.688 "nvme_tcp": { 00:05:58.688 "mask": "0x2000", 00:05:58.688 "tpoint_mask": "0x0" 00:05:58.688 }, 00:05:58.688 "bdev_nvme": { 00:05:58.688 "mask": "0x4000", 00:05:58.688 "tpoint_mask": "0x0" 00:05:58.688 }, 00:05:58.688 "sock": { 00:05:58.688 "mask": "0x8000", 00:05:58.688 "tpoint_mask": "0x0" 00:05:58.688 } 00:05:58.688 }' 00:05:58.688 06:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:58.947 06:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:58.947 06:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:58.947 06:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:58.947 06:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:58.947 06:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:58.947 06:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:58.947 06:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:58.947 06:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:58.947 06:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:58.947 00:05:58.947 real 0m0.051s 00:05:58.947 user 0m0.025s 00:05:58.947 sys 0m0.018s 00:05:58.947 06:19:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.947 06:19:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.947 ************************************ 00:05:58.947 END TEST rpc_trace_cmd_test 00:05:58.947 ************************************ 00:05:58.947 06:19:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:58.947 06:19:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:58.947 06:19:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:58.947 06:19:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:58.947 06:19:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.947 06:19:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.947 06:19:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.947 ************************************ 00:05:58.947 START TEST rpc_daemon_integrity 00:05:58.947 ************************************ 00:05:58.947 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:58.947 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:58.947 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.947 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.947 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.947 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:58.947 
06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:58.947 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:58.947 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:58.947 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.947 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.947 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.947 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:58.948 { 00:05:58.948 "name": "Malloc2", 00:05:58.948 "aliases": [ 00:05:58.948 "79e57887-48bb-11ef-a06c-59ddad71024c" 00:05:58.948 ], 00:05:58.948 "product_name": "Malloc disk", 00:05:58.948 "block_size": 512, 00:05:58.948 "num_blocks": 16384, 00:05:58.948 "uuid": "79e57887-48bb-11ef-a06c-59ddad71024c", 00:05:58.948 "assigned_rate_limits": { 00:05:58.948 "rw_ios_per_sec": 0, 00:05:58.948 "rw_mbytes_per_sec": 0, 00:05:58.948 "r_mbytes_per_sec": 0, 00:05:58.948 "w_mbytes_per_sec": 0 00:05:58.948 }, 00:05:58.948 "claimed": false, 00:05:58.948 "zoned": false, 00:05:58.948 "supported_io_types": { 00:05:58.948 "read": true, 00:05:58.948 "write": true, 00:05:58.948 "unmap": true, 00:05:58.948 "flush": true, 00:05:58.948 "reset": true, 00:05:58.948 "nvme_admin": false, 00:05:58.948 "nvme_io": false, 00:05:58.948 "nvme_io_md": false, 00:05:58.948 "write_zeroes": true, 00:05:58.948 "zcopy": true, 00:05:58.948 "get_zone_info": false, 00:05:58.948 "zone_management": false, 00:05:58.948 "zone_append": false, 00:05:58.948 "compare": false, 00:05:58.948 "compare_and_write": false, 00:05:58.948 "abort": true, 00:05:58.948 "seek_hole": false, 00:05:58.948 "seek_data": false, 00:05:58.948 "copy": true, 00:05:58.948 "nvme_iov_md": false 00:05:58.948 }, 00:05:58.948 "memory_domains": [ 00:05:58.948 { 00:05:58.948 "dma_device_id": "system", 00:05:58.948 "dma_device_type": 1 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.948 "dma_device_type": 2 00:05:58.948 } 00:05:58.948 ], 00:05:58.948 "driver_specific": {} 00:05:58.948 } 00:05:58.948 ]' 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.948 [2024-07-23 06:19:11.330062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:58.948 [2024-07-23 06:19:11.330108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.948 [2024-07-23 06:19:11.330136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28f112837a00 00:05:58.948 [2024-07-23 
06:19:11.330145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.948 [2024-07-23 06:19:11.330799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.948 [2024-07-23 06:19:11.330827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:58.948 Passthru0 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:58.948 { 00:05:58.948 "name": "Malloc2", 00:05:58.948 "aliases": [ 00:05:58.948 "79e57887-48bb-11ef-a06c-59ddad71024c" 00:05:58.948 ], 00:05:58.948 "product_name": "Malloc disk", 00:05:58.948 "block_size": 512, 00:05:58.948 "num_blocks": 16384, 00:05:58.948 "uuid": "79e57887-48bb-11ef-a06c-59ddad71024c", 00:05:58.948 "assigned_rate_limits": { 00:05:58.948 "rw_ios_per_sec": 0, 00:05:58.948 "rw_mbytes_per_sec": 0, 00:05:58.948 "r_mbytes_per_sec": 0, 00:05:58.948 "w_mbytes_per_sec": 0 00:05:58.948 }, 00:05:58.948 "claimed": true, 00:05:58.948 "claim_type": "exclusive_write", 00:05:58.948 "zoned": false, 00:05:58.948 "supported_io_types": { 00:05:58.948 "read": true, 00:05:58.948 "write": true, 00:05:58.948 "unmap": true, 00:05:58.948 "flush": true, 00:05:58.948 "reset": true, 00:05:58.948 "nvme_admin": false, 00:05:58.948 "nvme_io": false, 00:05:58.948 "nvme_io_md": false, 00:05:58.948 "write_zeroes": true, 00:05:58.948 "zcopy": true, 00:05:58.948 "get_zone_info": false, 00:05:58.948 "zone_management": false, 00:05:58.948 "zone_append": false, 00:05:58.948 "compare": false, 00:05:58.948 "compare_and_write": false, 00:05:58.948 "abort": true, 00:05:58.948 "seek_hole": false, 00:05:58.948 "seek_data": false, 00:05:58.948 "copy": true, 00:05:58.948 "nvme_iov_md": false 00:05:58.948 }, 00:05:58.948 "memory_domains": [ 00:05:58.948 { 00:05:58.948 "dma_device_id": "system", 00:05:58.948 "dma_device_type": 1 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.948 "dma_device_type": 2 00:05:58.948 } 00:05:58.948 ], 00:05:58.948 "driver_specific": {} 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "name": "Passthru0", 00:05:58.948 "aliases": [ 00:05:58.948 "c1a56ec8-40e3-6359-b12c-2fc97337fbde" 00:05:58.948 ], 00:05:58.948 "product_name": "passthru", 00:05:58.948 "block_size": 512, 00:05:58.948 "num_blocks": 16384, 00:05:58.948 "uuid": "c1a56ec8-40e3-6359-b12c-2fc97337fbde", 00:05:58.948 "assigned_rate_limits": { 00:05:58.948 "rw_ios_per_sec": 0, 00:05:58.948 "rw_mbytes_per_sec": 0, 00:05:58.948 "r_mbytes_per_sec": 0, 00:05:58.948 "w_mbytes_per_sec": 0 00:05:58.948 }, 00:05:58.948 "claimed": false, 00:05:58.948 "zoned": false, 00:05:58.948 "supported_io_types": { 00:05:58.948 "read": true, 00:05:58.948 "write": true, 00:05:58.948 "unmap": true, 00:05:58.948 "flush": true, 00:05:58.948 "reset": true, 00:05:58.948 "nvme_admin": false, 00:05:58.948 "nvme_io": false, 00:05:58.948 "nvme_io_md": false, 00:05:58.948 "write_zeroes": true, 00:05:58.948 "zcopy": true, 00:05:58.948 "get_zone_info": false, 00:05:58.948 "zone_management": false, 00:05:58.948 "zone_append": 
false, 00:05:58.948 "compare": false, 00:05:58.948 "compare_and_write": false, 00:05:58.948 "abort": true, 00:05:58.948 "seek_hole": false, 00:05:58.948 "seek_data": false, 00:05:58.948 "copy": true, 00:05:58.948 "nvme_iov_md": false 00:05:58.948 }, 00:05:58.948 "memory_domains": [ 00:05:58.948 { 00:05:58.948 "dma_device_id": "system", 00:05:58.948 "dma_device_type": 1 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.948 "dma_device_type": 2 00:05:58.948 } 00:05:58.948 ], 00:05:58.948 "driver_specific": { 00:05:58.948 "passthru": { 00:05:58.948 "name": "Passthru0", 00:05:58.948 "base_bdev_name": "Malloc2" 00:05:58.948 } 00:05:58.948 } 00:05:58.948 } 00:05:58.948 ]' 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:58.948 00:05:58.948 real 0m0.131s 00:05:58.948 user 0m0.038s 00:05:58.948 sys 0m0.036s 00:05:58.948 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.949 06:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.949 ************************************ 00:05:58.949 END TEST rpc_daemon_integrity 00:05:58.949 ************************************ 00:05:58.949 06:19:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:58.949 06:19:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:58.949 06:19:11 rpc -- rpc/rpc.sh@84 -- # killprocess 45530 00:05:58.949 06:19:11 rpc -- common/autotest_common.sh@948 -- # '[' -z 45530 ']' 00:05:58.949 06:19:11 rpc -- common/autotest_common.sh@952 -- # kill -0 45530 00:05:58.949 06:19:11 rpc -- common/autotest_common.sh@953 -- # uname 00:05:58.949 06:19:11 rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:58.949 06:19:11 rpc -- common/autotest_common.sh@956 -- # ps -c -o command 45530 00:05:58.949 06:19:11 rpc -- common/autotest_common.sh@956 -- # tail -1 00:05:58.949 06:19:11 rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:58.949 killing process with pid 45530 
00:05:58.949 06:19:11 rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:58.949 06:19:11 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45530' 00:05:58.949 06:19:11 rpc -- common/autotest_common.sh@967 -- # kill 45530 00:05:58.949 06:19:11 rpc -- common/autotest_common.sh@972 -- # wait 45530 00:05:59.207 00:05:59.207 real 0m2.005s 00:05:59.207 user 0m2.008s 00:05:59.207 sys 0m0.955s 00:05:59.207 06:19:11 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.207 06:19:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.207 ************************************ 00:05:59.207 END TEST rpc 00:05:59.207 ************************************ 00:05:59.467 06:19:11 -- common/autotest_common.sh@1142 -- # return 0 00:05:59.467 06:19:11 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:59.467 06:19:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.467 06:19:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.467 06:19:11 -- common/autotest_common.sh@10 -- # set +x 00:05:59.467 ************************************ 00:05:59.467 START TEST skip_rpc 00:05:59.467 ************************************ 00:05:59.467 06:19:11 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:59.467 * Looking for test storage... 00:05:59.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:59.467 06:19:11 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:59.467 06:19:11 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.467 06:19:11 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:59.467 06:19:11 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.467 06:19:11 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.467 06:19:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.467 ************************************ 00:05:59.467 START TEST skip_rpc 00:05:59.467 ************************************ 00:05:59.468 06:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:59.468 06:19:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=45706 00:05:59.468 06:19:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.468 06:19:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:59.468 06:19:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:59.468 [2024-07-23 06:19:11.932168] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:59.468 [2024-07-23 06:19:11.932402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:00.035 EAL: TSC is not safe to use in SMP mode 00:06:00.035 EAL: TSC is not invariant 00:06:00.035 [2024-07-23 06:19:12.471125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.294 [2024-07-23 06:19:12.569637] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:06:00.294 [2024-07-23 06:19:12.572139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 45706 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 45706 ']' 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 45706 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 45706 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # tail -1 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:06:05.601 killing process with pid 45706 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45706' 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 45706 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 45706 00:06:05.601 00:06:05.601 real 0m5.387s 00:06:05.601 user 0m4.815s 00:06:05.601 sys 0m0.587s 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.601 06:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.601 ************************************ 00:06:05.601 END TEST skip_rpc 00:06:05.601 ************************************ 00:06:05.601 06:19:17 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:05.601 06:19:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:05.601 06:19:17 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.601 06:19:17 skip_rpc -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.601 06:19:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.601 ************************************ 00:06:05.601 START TEST skip_rpc_with_json 00:06:05.601 ************************************ 00:06:05.601 06:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:05.601 06:19:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:05.601 06:19:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=45751 00:06:05.601 06:19:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.601 06:19:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 45751 00:06:05.601 06:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 45751 ']' 00:06:05.601 06:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.601 06:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.601 06:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.601 06:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.601 06:19:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.601 06:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:05.601 [2024-07-23 06:19:17.361285] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:05.601 [2024-07-23 06:19:17.361527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:05.601 EAL: TSC is not safe to use in SMP mode 00:06:05.601 EAL: TSC is not invariant 00:06:05.601 [2024-07-23 06:19:17.891833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.602 [2024-07-23 06:19:17.979655] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:06:05.602 [2024-07-23 06:19:17.981759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.170 [2024-07-23 06:19:18.426642] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:06.170 request: 00:06:06.170 { 00:06:06.170 "trtype": "tcp", 00:06:06.170 "method": "nvmf_get_transports", 00:06:06.170 "req_id": 1 00:06:06.170 } 00:06:06.170 Got JSON-RPC error response 00:06:06.170 response: 00:06:06.170 { 00:06:06.170 "code": -19, 00:06:06.170 "message": "Operation not supported by device" 00:06:06.170 } 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.170 [2024-07-23 06:19:18.434653] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.170 06:19:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:06.170 { 00:06:06.170 "subsystems": [ 00:06:06.170 { 00:06:06.170 "subsystem": "vmd", 00:06:06.170 "config": [] 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "subsystem": "iobuf", 00:06:06.170 "config": [ 00:06:06.170 { 00:06:06.170 "method": "iobuf_set_options", 00:06:06.170 "params": { 00:06:06.170 "small_pool_count": 8192, 00:06:06.170 "large_pool_count": 1024, 00:06:06.170 "small_bufsize": 8192, 00:06:06.170 "large_bufsize": 135168 00:06:06.170 } 00:06:06.170 } 00:06:06.170 ] 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "subsystem": "scheduler", 00:06:06.170 "config": [ 00:06:06.170 { 00:06:06.170 "method": "framework_set_scheduler", 00:06:06.170 "params": { 00:06:06.170 "name": "static" 00:06:06.170 } 00:06:06.170 } 00:06:06.170 ] 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "subsystem": "sock", 00:06:06.170 "config": [ 00:06:06.170 { 00:06:06.170 "method": "sock_set_default_impl", 00:06:06.170 "params": { 00:06:06.170 "impl_name": "posix" 00:06:06.170 } 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "method": "sock_impl_set_options", 00:06:06.170 "params": { 00:06:06.170 "impl_name": "ssl", 00:06:06.170 "recv_buf_size": 4096, 00:06:06.170 "send_buf_size": 4096, 00:06:06.170 "enable_recv_pipe": true, 00:06:06.170 "enable_quickack": false, 00:06:06.170 "enable_placement_id": 0, 00:06:06.170 
"enable_zerocopy_send_server": true, 00:06:06.170 "enable_zerocopy_send_client": false, 00:06:06.170 "zerocopy_threshold": 0, 00:06:06.170 "tls_version": 0, 00:06:06.170 "enable_ktls": false 00:06:06.170 } 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "method": "sock_impl_set_options", 00:06:06.170 "params": { 00:06:06.170 "impl_name": "posix", 00:06:06.170 "recv_buf_size": 2097152, 00:06:06.170 "send_buf_size": 2097152, 00:06:06.170 "enable_recv_pipe": true, 00:06:06.170 "enable_quickack": false, 00:06:06.170 "enable_placement_id": 0, 00:06:06.170 "enable_zerocopy_send_server": true, 00:06:06.170 "enable_zerocopy_send_client": false, 00:06:06.170 "zerocopy_threshold": 0, 00:06:06.170 "tls_version": 0, 00:06:06.170 "enable_ktls": false 00:06:06.170 } 00:06:06.170 } 00:06:06.170 ] 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "subsystem": "keyring", 00:06:06.170 "config": [] 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "subsystem": "accel", 00:06:06.170 "config": [ 00:06:06.170 { 00:06:06.170 "method": "accel_set_options", 00:06:06.170 "params": { 00:06:06.170 "small_cache_size": 128, 00:06:06.170 "large_cache_size": 16, 00:06:06.170 "task_count": 2048, 00:06:06.170 "sequence_count": 2048, 00:06:06.170 "buf_count": 2048 00:06:06.170 } 00:06:06.170 } 00:06:06.170 ] 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "subsystem": "bdev", 00:06:06.170 "config": [ 00:06:06.170 { 00:06:06.170 "method": "bdev_set_options", 00:06:06.170 "params": { 00:06:06.170 "bdev_io_pool_size": 65535, 00:06:06.170 "bdev_io_cache_size": 256, 00:06:06.170 "bdev_auto_examine": true, 00:06:06.170 "iobuf_small_cache_size": 128, 00:06:06.170 "iobuf_large_cache_size": 16 00:06:06.170 } 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "method": "bdev_raid_set_options", 00:06:06.170 "params": { 00:06:06.170 "process_window_size_kb": 1024, 00:06:06.170 "process_max_bandwidth_mb_sec": 0 00:06:06.170 } 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "method": "bdev_nvme_set_options", 00:06:06.170 "params": { 00:06:06.170 "action_on_timeout": "none", 00:06:06.170 "timeout_us": 0, 00:06:06.170 "timeout_admin_us": 0, 00:06:06.170 "keep_alive_timeout_ms": 10000, 00:06:06.170 "arbitration_burst": 0, 00:06:06.170 "low_priority_weight": 0, 00:06:06.170 "medium_priority_weight": 0, 00:06:06.170 "high_priority_weight": 0, 00:06:06.170 "nvme_adminq_poll_period_us": 10000, 00:06:06.170 "nvme_ioq_poll_period_us": 0, 00:06:06.170 "io_queue_requests": 0, 00:06:06.170 "delay_cmd_submit": true, 00:06:06.170 "transport_retry_count": 4, 00:06:06.170 "bdev_retry_count": 3, 00:06:06.170 "transport_ack_timeout": 0, 00:06:06.170 "ctrlr_loss_timeout_sec": 0, 00:06:06.170 "reconnect_delay_sec": 0, 00:06:06.170 "fast_io_fail_timeout_sec": 0, 00:06:06.170 "disable_auto_failback": false, 00:06:06.170 "generate_uuids": false, 00:06:06.170 "transport_tos": 0, 00:06:06.170 "nvme_error_stat": false, 00:06:06.170 "rdma_srq_size": 0, 00:06:06.170 "io_path_stat": false, 00:06:06.170 "allow_accel_sequence": false, 00:06:06.170 "rdma_max_cq_size": 0, 00:06:06.170 "rdma_cm_event_timeout_ms": 0, 00:06:06.170 "dhchap_digests": [ 00:06:06.170 "sha256", 00:06:06.170 "sha384", 00:06:06.170 "sha512" 00:06:06.170 ], 00:06:06.170 "dhchap_dhgroups": [ 00:06:06.170 "null", 00:06:06.170 "ffdhe2048", 00:06:06.170 "ffdhe3072", 00:06:06.170 "ffdhe4096", 00:06:06.170 "ffdhe6144", 00:06:06.170 "ffdhe8192" 00:06:06.170 ] 00:06:06.170 } 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "method": "bdev_nvme_set_hotplug", 00:06:06.170 "params": { 00:06:06.170 "period_us": 100000, 00:06:06.170 "enable": 
false 00:06:06.170 } 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "method": "bdev_wait_for_examine" 00:06:06.170 } 00:06:06.170 ] 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "subsystem": "scsi", 00:06:06.170 "config": null 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "subsystem": "nvmf", 00:06:06.170 "config": [ 00:06:06.170 { 00:06:06.170 "method": "nvmf_set_config", 00:06:06.170 "params": { 00:06:06.170 "discovery_filter": "match_any", 00:06:06.170 "admin_cmd_passthru": { 00:06:06.170 "identify_ctrlr": false 00:06:06.170 } 00:06:06.170 } 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "method": "nvmf_set_max_subsystems", 00:06:06.170 "params": { 00:06:06.170 "max_subsystems": 1024 00:06:06.170 } 00:06:06.170 }, 00:06:06.170 { 00:06:06.170 "method": "nvmf_set_crdt", 00:06:06.170 "params": { 00:06:06.170 "crdt1": 0, 00:06:06.170 "crdt2": 0, 00:06:06.170 "crdt3": 0 00:06:06.170 } 00:06:06.170 }, 00:06:06.170 { 00:06:06.171 "method": "nvmf_create_transport", 00:06:06.171 "params": { 00:06:06.171 "trtype": "TCP", 00:06:06.171 "max_queue_depth": 128, 00:06:06.171 "max_io_qpairs_per_ctrlr": 127, 00:06:06.171 "in_capsule_data_size": 4096, 00:06:06.171 "max_io_size": 131072, 00:06:06.171 "io_unit_size": 131072, 00:06:06.171 "max_aq_depth": 128, 00:06:06.171 "num_shared_buffers": 511, 00:06:06.171 "buf_cache_size": 4294967295, 00:06:06.171 "dif_insert_or_strip": false, 00:06:06.171 "zcopy": false, 00:06:06.171 "c2h_success": true, 00:06:06.171 "sock_priority": 0, 00:06:06.171 "abort_timeout_sec": 1, 00:06:06.171 "ack_timeout": 0, 00:06:06.171 "data_wr_pool_size": 0 00:06:06.171 } 00:06:06.171 } 00:06:06.171 ] 00:06:06.171 }, 00:06:06.171 { 00:06:06.171 "subsystem": "iscsi", 00:06:06.171 "config": [ 00:06:06.171 { 00:06:06.171 "method": "iscsi_set_options", 00:06:06.171 "params": { 00:06:06.171 "node_base": "iqn.2016-06.io.spdk", 00:06:06.171 "max_sessions": 128, 00:06:06.171 "max_connections_per_session": 2, 00:06:06.171 "max_queue_depth": 64, 00:06:06.171 "default_time2wait": 2, 00:06:06.171 "default_time2retain": 20, 00:06:06.171 "first_burst_length": 8192, 00:06:06.171 "immediate_data": true, 00:06:06.171 "allow_duplicated_isid": false, 00:06:06.171 "error_recovery_level": 0, 00:06:06.171 "nop_timeout": 60, 00:06:06.171 "nop_in_interval": 30, 00:06:06.171 "disable_chap": false, 00:06:06.171 "require_chap": false, 00:06:06.171 "mutual_chap": false, 00:06:06.171 "chap_group": 0, 00:06:06.171 "max_large_datain_per_connection": 64, 00:06:06.171 "max_r2t_per_connection": 4, 00:06:06.171 "pdu_pool_size": 36864, 00:06:06.171 "immediate_data_pool_size": 16384, 00:06:06.171 "data_out_pool_size": 2048 00:06:06.171 } 00:06:06.171 } 00:06:06.171 ] 00:06:06.171 } 00:06:06.171 ] 00:06:06.171 } 00:06:06.171 06:19:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:06.171 06:19:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 45751 00:06:06.171 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 45751 ']' 00:06:06.171 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 45751 00:06:06.171 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:06.171 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:06.171 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps -c -o command 45751 00:06:06.171 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # tail -1 00:06:06.171 
06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:06:06.171 killing process with pid 45751 00:06:06.171 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:06:06.171 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45751' 00:06:06.171 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 45751 00:06:06.171 06:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 45751 00:06:06.430 06:19:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=45765 00:06:06.430 06:19:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:06.430 06:19:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:11.699 06:19:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 45765 00:06:11.699 06:19:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 45765 ']' 00:06:11.699 06:19:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 45765 00:06:11.699 06:19:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:11.699 06:19:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:11.699 06:19:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps -c -o command 45765 00:06:11.699 06:19:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # tail -1 00:06:11.699 06:19:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:06:11.699 06:19:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:06:11.699 killing process with pid 45765 00:06:11.699 06:19:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45765' 00:06:11.699 06:19:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 45765 00:06:11.699 06:19:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 45765 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:11.699 00:06:11.699 real 0m6.799s 00:06:11.699 user 0m6.212s 00:06:11.699 sys 0m1.170s 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:11.699 ************************************ 00:06:11.699 END TEST skip_rpc_with_json 00:06:11.699 ************************************ 00:06:11.699 06:19:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:11.699 06:19:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:11.699 06:19:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.699 06:19:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.699 06:19:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.699 ************************************ 00:06:11.699 START TEST skip_rpc_with_delay 00:06:11.699 
************************************ 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:11.699 [2024-07-23 06:19:24.202804] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:11.699 [2024-07-23 06:19:24.203036] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.699 00:06:11.699 real 0m0.013s 00:06:11.699 user 0m0.005s 00:06:11.699 sys 0m0.008s 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.699 06:19:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:11.699 ************************************ 00:06:11.699 END TEST skip_rpc_with_delay 00:06:11.699 ************************************ 00:06:11.959 06:19:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:11.959 06:19:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:11.959 06:19:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' FreeBSD '!=' FreeBSD ']' 00:06:11.959 06:19:24 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:11.959 00:06:11.959 real 0m12.493s 00:06:11.959 user 0m11.226s 00:06:11.959 sys 0m1.925s 00:06:11.959 06:19:24 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.959 ************************************ 00:06:11.959 END TEST skip_rpc 00:06:11.959 ************************************ 00:06:11.959 06:19:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.959 06:19:24 -- common/autotest_common.sh@1142 -- # return 0 00:06:11.959 06:19:24 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:11.959 06:19:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.959 06:19:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.959 06:19:24 -- common/autotest_common.sh@10 -- # set +x 00:06:11.959 ************************************ 00:06:11.959 START TEST rpc_client 00:06:11.959 ************************************ 00:06:11.959 06:19:24 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:11.959 * Looking for test storage... 
00:06:11.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:11.959 06:19:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:11.959 OK 00:06:11.959 06:19:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:11.959 00:06:11.959 real 0m0.156s 00:06:11.959 user 0m0.113s 00:06:11.959 sys 0m0.112s 00:06:11.959 06:19:24 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.959 06:19:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:11.959 ************************************ 00:06:11.959 END TEST rpc_client 00:06:11.959 ************************************ 00:06:11.959 06:19:24 -- common/autotest_common.sh@1142 -- # return 0 00:06:11.959 06:19:24 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:11.959 06:19:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.959 06:19:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.959 06:19:24 -- common/autotest_common.sh@10 -- # set +x 00:06:12.217 ************************************ 00:06:12.217 START TEST json_config 00:06:12.217 ************************************ 00:06:12.217 06:19:24 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:12.217 06:19:24 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:12.217 06:19:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:12.217 06:19:24 json_config -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:06:12.217 06:19:24 json_config -- nvmf/common.sh@7 -- # return 0 00:06:12.217 06:19:24 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:12.217 06:19:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:12.217 06:19:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:12.217 06:19:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:12.217 06:19:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:12.217 06:19:24 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:12.217 06:19:24 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:12.217 06:19:24 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:12.217 06:19:24 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:12.217 06:19:24 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:12.218 06:19:24 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:12.218 06:19:24 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:12.218 06:19:24 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:12.218 06:19:24 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:12.218 06:19:24 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:06:12.218 INFO: JSON configuration test init 00:06:12.218 06:19:24 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:12.218 06:19:24 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:12.218 06:19:24 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:12.218 06:19:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.218 06:19:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.218 06:19:24 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:12.218 06:19:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.218 06:19:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.218 06:19:24 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:12.218 06:19:24 json_config -- json_config/common.sh@9 -- # local app=target 00:06:12.218 06:19:24 json_config -- json_config/common.sh@10 -- # shift 00:06:12.218 06:19:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:12.218 06:19:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:12.218 06:19:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:12.218 06:19:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.218 06:19:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.218 06:19:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=45924 00:06:12.218 Waiting for target to run... 00:06:12.218 06:19:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:12.218 06:19:24 json_config -- json_config/common.sh@25 -- # waitforlisten 45924 /var/tmp/spdk_tgt.sock 00:06:12.218 06:19:24 json_config -- common/autotest_common.sh@829 -- # '[' -z 45924 ']' 00:06:12.218 06:19:24 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:12.218 06:19:24 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:12.218 06:19:24 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.218 06:19:24 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:12.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:12.218 06:19:24 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.218 06:19:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.218 [2024-07-23 06:19:24.631340] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:12.218 [2024-07-23 06:19:24.631533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:12.476 EAL: TSC is not safe to use in SMP mode 00:06:12.476 EAL: TSC is not invariant 00:06:12.476 [2024-07-23 06:19:24.915421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.772 [2024-07-23 06:19:25.016827] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:06:12.772 [2024-07-23 06:19:25.019323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.342 06:19:25 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.342 00:06:13.342 06:19:25 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:13.342 06:19:25 json_config -- json_config/common.sh@26 -- # echo '' 00:06:13.342 06:19:25 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:13.342 06:19:25 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:13.342 06:19:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.342 06:19:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.342 06:19:25 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:13.342 06:19:25 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:13.342 06:19:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.342 06:19:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.342 06:19:25 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:13.342 06:19:25 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:13.342 06:19:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:13.600 [2024-07-23 06:19:26.056733] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:06:13.858 06:19:26 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:13.858 06:19:26 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:13.858 06:19:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.858 06:19:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.858 06:19:26 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:13.858 06:19:26 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:13.858 06:19:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:13.858 06:19:26 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:13.858 06:19:26 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:13.858 06:19:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@51 -- # sort 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@58 -- # timing_exit 
tgt_check_notification_types 00:06:14.116 06:19:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.116 06:19:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:06:14.116 06:19:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:14.116 06:19:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:06:14.116 06:19:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:14.116 06:19:26 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:14.375 06:19:26 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:06:14.375 06:19:26 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:14.375 06:19:26 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:14.375 06:19:26 json_config -- json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:06:14.375 06:19:26 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:06:14.375 06:19:26 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:06:14.375 06:19:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:06:14.634 Nvme0n1p0 Nvme0n1p1 00:06:14.634 06:19:27 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:06:14.634 06:19:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:06:14.892 [2024-07-23 06:19:27.301986] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:14.892 [2024-07-23 06:19:27.302048] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:14.892 00:06:14.892 06:19:27 json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:06:14.892 06:19:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:06:15.150 Malloc3 00:06:15.150 06:19:27 json_config -- json_config/json_config.sh@123 -- # 
tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:06:15.150 06:19:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:06:15.408 [2024-07-23 06:19:27.814013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:15.408 [2024-07-23 06:19:27.814072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:15.408 [2024-07-23 06:19:27.814100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2a2c14e38180 00:06:15.408 [2024-07-23 06:19:27.814109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:15.408 [2024-07-23 06:19:27.814790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:15.408 [2024-07-23 06:19:27.814816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:15.408 PTBdevFromMalloc3 00:06:15.408 06:19:27 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:06:15.408 06:19:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:06:15.665 Null0 00:06:15.665 06:19:28 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:06:15.665 06:19:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:06:15.923 Malloc0 00:06:15.923 06:19:28 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:06:15.923 06:19:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:06:16.181 Malloc1 00:06:16.181 06:19:28 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:06:16.181 06:19:28 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:06:16.747 102400+0 records in 00:06:16.747 102400+0 records out 00:06:16.747 104857600 bytes transferred in 0.369870 secs (283498780 bytes/sec) 00:06:16.747 06:19:28 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:06:16.747 06:19:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:06:16.747 aio_disk 00:06:16.747 06:19:29 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:06:16.747 06:19:29 json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:16.747 06:19:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:17.028 84b9143f-48bb-11ef-a06c-59ddad71024c 00:06:17.028 06:19:29 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc 
bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:06:17.028 06:19:29 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:06:17.028 06:19:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:06:17.286 06:19:29 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:06:17.286 06:19:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:06:17.544 06:19:29 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:17.544 06:19:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:17.803 06:19:30 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:17.803 06:19:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@161 -- # [[ 0 -eq 1 ]] 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:84ddb480-48bb-11ef-a06c-59ddad71024c bdev_register:8507d205-48bb-11ef-a06c-59ddad71024c bdev_register:852da9a5-48bb-11ef-a06c-59ddad71024c bdev_register:8551ad37-48bb-11ef-a06c-59ddad71024c 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:84ddb480-48bb-11ef-a06c-59ddad71024c bdev_register:8507d205-48bb-11ef-a06c-59ddad71024c bdev_register:852da9a5-48bb-11ef-a06c-59ddad71024c bdev_register:8551ad37-48bb-11ef-a06c-59ddad71024c 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@75 -- # sort 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@76 -- # sort 
00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:06:18.061 06:19:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:18.061 06:19:30 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:06:18.320 06:19:30 json_config -- 
json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:84ddb480-48bb-11ef-a06c-59ddad71024c 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:8507d205-48bb-11ef-a06c-59ddad71024c 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:852da9a5-48bb-11ef-a06c-59ddad71024c 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:8551ad37-48bb-11ef-a06c-59ddad71024c 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:84ddb480-48bb-11ef-a06c-59ddad71024c bdev_register:8507d205-48bb-11ef-a06c-59ddad71024c bdev_register:852da9a5-48bb-11ef-a06c-59ddad71024c bdev_register:8551ad37-48bb-11ef-a06c-59ddad71024c bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\4\d\d\b\4\8\0\-\4\8\b\b\-\1\1\e\f\-\a\0\6\c\-\5\9\d\d\a\d\7\1\0\2\4\c\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\5\0\7\d\2\0\5\-\4\8\b\b\-\1\1\e\f\-\a\0\6\c\-\5\9\d\d\a\d\7\1\0\2\4\c\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\5\2\d\a\9\a\5\-\4\8\b\b\-\1\1\e\f\-\a\0\6\c\-\5\9\d\d\a\d\7\1\0\2\4\c\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\5\5\1\a\d\3\7\-\4\8\b\b\-\1\1\e\f\-\a\0\6\c\-\5\9\d\d\a\d\7\1\0\2\4\c\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@90 -- # cat 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:84ddb480-48bb-11ef-a06c-59ddad71024c bdev_register:8507d205-48bb-11ef-a06c-59ddad71024c bdev_register:852da9a5-48bb-11ef-a06c-59ddad71024c 
bdev_register:8551ad37-48bb-11ef-a06c-59ddad71024c bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:06:18.320 Expected events matched: 00:06:18.320 bdev_register:84ddb480-48bb-11ef-a06c-59ddad71024c 00:06:18.320 bdev_register:8507d205-48bb-11ef-a06c-59ddad71024c 00:06:18.320 bdev_register:852da9a5-48bb-11ef-a06c-59ddad71024c 00:06:18.320 bdev_register:8551ad37-48bb-11ef-a06c-59ddad71024c 00:06:18.320 bdev_register:Malloc0 00:06:18.320 bdev_register:Malloc0p0 00:06:18.320 bdev_register:Malloc0p1 00:06:18.320 bdev_register:Malloc0p2 00:06:18.320 bdev_register:Malloc1 00:06:18.320 bdev_register:Malloc3 00:06:18.320 bdev_register:Null0 00:06:18.320 bdev_register:Nvme0n1 00:06:18.320 bdev_register:Nvme0n1p0 00:06:18.320 bdev_register:Nvme0n1p1 00:06:18.320 bdev_register:PTBdevFromMalloc3 00:06:18.320 bdev_register:aio_disk 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config 00:06:18.320 06:19:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.320 06:19:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:18.320 06:19:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.320 06:19:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:18.320 06:19:30 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:18.320 06:19:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:18.579 MallocBdevForConfigChangeCheck 00:06:18.579 06:19:31 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:18.579 06:19:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.579 06:19:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.579 06:19:31 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:18.579 06:19:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.171 INFO: shutting down applications... 00:06:19.171 06:19:31 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
00:06:19.171 06:19:31 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:19.171 06:19:31 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:19.171 06:19:31 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:19.171 06:19:31 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:19.171 [2024-07-23 06:19:31.574211] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:06:19.429 Calling clear_iscsi_subsystem 00:06:19.429 Calling clear_nvmf_subsystem 00:06:19.429 Calling clear_bdev_subsystem 00:06:19.429 06:19:31 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:19.429 06:19:31 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:19.429 06:19:31 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:19.429 06:19:31 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.429 06:19:31 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:19.429 06:19:31 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:19.688 06:19:32 json_config -- json_config/json_config.sh@349 -- # break 00:06:19.688 06:19:32 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:19.688 06:19:32 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:19.688 06:19:32 json_config -- json_config/common.sh@31 -- # local app=target 00:06:19.688 06:19:32 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:19.688 06:19:32 json_config -- json_config/common.sh@35 -- # [[ -n 45924 ]] 00:06:19.688 06:19:32 json_config -- json_config/common.sh@38 -- # kill -SIGINT 45924 00:06:19.688 06:19:32 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:19.688 06:19:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.688 06:19:32 json_config -- json_config/common.sh@41 -- # kill -0 45924 00:06:19.688 06:19:32 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.255 06:19:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:20.255 06:19:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.255 06:19:32 json_config -- json_config/common.sh@41 -- # kill -0 45924 00:06:20.255 06:19:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:20.255 06:19:32 json_config -- json_config/common.sh@43 -- # break 00:06:20.255 SPDK target shutdown done 00:06:20.255 06:19:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:20.255 06:19:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:20.255 INFO: relaunching applications... 00:06:20.255 06:19:32 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
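The shutdown traced above is a SIGINT followed by a bounded liveness poll; a rough sketch of the common.sh pattern (not a verbatim copy):

  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break   # stop polling once the target has exited
      sleep 0.5
  done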
00:06:20.255 06:19:32 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:20.255 06:19:32 json_config -- json_config/common.sh@9 -- # local app=target 00:06:20.255 06:19:32 json_config -- json_config/common.sh@10 -- # shift 00:06:20.255 06:19:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:20.255 06:19:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:20.255 06:19:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:20.255 06:19:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.255 06:19:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.255 06:19:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46115 00:06:20.255 06:19:32 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:20.255 Waiting for target to run... 00:06:20.255 06:19:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:20.255 06:19:32 json_config -- json_config/common.sh@25 -- # waitforlisten 46115 /var/tmp/spdk_tgt.sock 00:06:20.256 06:19:32 json_config -- common/autotest_common.sh@829 -- # '[' -z 46115 ']' 00:06:20.256 06:19:32 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:20.256 06:19:32 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.256 06:19:32 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.256 06:19:32 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.256 06:19:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.256 [2024-07-23 06:19:32.671538] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:20.256 [2024-07-23 06:19:32.671692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:20.514 EAL: TSC is not safe to use in SMP mode 00:06:20.514 EAL: TSC is not invariant 00:06:20.514 [2024-07-23 06:19:32.941435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.772 [2024-07-23 06:19:33.031463] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:06:20.772 [2024-07-23 06:19:33.033695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.772 [2024-07-23 06:19:33.177071] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:20.772 [2024-07-23 06:19:33.177134] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:20.772 [2024-07-23 06:19:33.185052] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:20.772 [2024-07-23 06:19:33.185082] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:20.772 [2024-07-23 06:19:33.193073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:20.772 [2024-07-23 06:19:33.193106] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:20.772 [2024-07-23 06:19:33.193116] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:20.772 [2024-07-23 06:19:33.201084] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:06:20.772 [2024-07-23 06:19:33.273818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:20.772 [2024-07-23 06:19:33.273892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.772 [2024-07-23 06:19:33.273903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x203d51637780 00:06:20.772 [2024-07-23 06:19:33.273911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.772 [2024-07-23 06:19:33.274028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.772 [2024-07-23 06:19:33.274041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:21.338 06:19:33 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.338 00:06:21.338 06:19:33 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:21.338 06:19:33 json_config -- json_config/common.sh@26 -- # echo '' 00:06:21.338 06:19:33 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:21.338 INFO: Checking if target configuration is the same... 00:06:21.338 06:19:33 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:21.338 06:19:33 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.FBVfE8 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:21.338 + '[' 2 -ne 2 ']' 00:06:21.338 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:21.338 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:21.338 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:21.338 +++ basename /tmp//sh-np.FBVfE8 00:06:21.338 ++ mktemp /tmp/sh-np.FBVfE8.XXX 00:06:21.338 + tmp_file_1=/tmp/sh-np.FBVfE8.IvZ 00:06:21.338 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:21.338 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:21.338 + tmp_file_2=/tmp/spdk_tgt_config.json.BqA 00:06:21.338 + ret=0 00:06:21.338 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:21.338 06:19:33 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:21.338 06:19:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.904 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:21.904 + diff -u /tmp/sh-np.FBVfE8.IvZ /tmp/spdk_tgt_config.json.BqA 00:06:21.904 INFO: JSON config files are the same 00:06:21.904 + echo 'INFO: JSON config files are the same' 00:06:21.904 + rm /tmp/sh-np.FBVfE8.IvZ /tmp/spdk_tgt_config.json.BqA 00:06:21.904 + exit 0 00:06:21.904 06:19:34 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:21.904 INFO: changing configuration and checking if this can be detected... 00:06:21.904 06:19:34 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:21.904 06:19:34 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:21.904 06:19:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:22.162 06:19:34 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.PYIF5Q /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:22.163 + '[' 2 -ne 2 ']' 00:06:22.163 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:22.163 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:22.163 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:22.163 +++ basename /tmp//sh-np.PYIF5Q 00:06:22.163 ++ mktemp /tmp/sh-np.PYIF5Q.XXX 00:06:22.163 + tmp_file_1=/tmp/sh-np.PYIF5Q.TnY 00:06:22.163 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:22.163 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:22.163 + tmp_file_2=/tmp/spdk_tgt_config.json.38D 00:06:22.163 + ret=0 00:06:22.163 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:22.163 06:19:34 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:22.163 06:19:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.731 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:22.731 + diff -u /tmp/sh-np.PYIF5Q.TnY /tmp/spdk_tgt_config.json.38D 00:06:22.731 + ret=1 00:06:22.731 + echo '=== Start of file: /tmp/sh-np.PYIF5Q.TnY ===' 00:06:22.731 + cat /tmp/sh-np.PYIF5Q.TnY 00:06:22.731 + echo '=== End of file: /tmp/sh-np.PYIF5Q.TnY ===' 00:06:22.731 + echo '' 00:06:22.731 + echo '=== Start of file: /tmp/spdk_tgt_config.json.38D ===' 00:06:22.731 + cat /tmp/spdk_tgt_config.json.38D 00:06:22.731 + echo '=== End of file: /tmp/spdk_tgt_config.json.38D ===' 00:06:22.731 + echo '' 00:06:22.731 + rm /tmp/sh-np.PYIF5Q.TnY /tmp/spdk_tgt_config.json.38D 00:06:22.731 + exit 1 00:06:22.731 INFO: configuration change detected. 00:06:22.731 06:19:35 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:22.731 06:19:35 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:22.731 06:19:35 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:22.731 06:19:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:22.731 06:19:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.731 06:19:35 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:22.731 06:19:35 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:22.731 06:19:35 json_config -- json_config/json_config.sh@321 -- # [[ -n 46115 ]] 00:06:22.731 06:19:35 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:22.731 06:19:35 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:22.731 06:19:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:22.731 06:19:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.731 06:19:35 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]] 00:06:22.731 06:19:35 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:06:22.731 06:19:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:06:22.990 06:19:35 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:06:22.990 06:19:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:06:23.248 06:19:35 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:06:23.248 06:19:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 
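The change detection that just returned ret=1 reduces to sorting both JSON documents and diffing them; a rough hand-run equivalent (temporary file names differ per run):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > /tmp/running.json
  /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
      < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/running.json || echo 'INFO: configuration change detected.'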
00:06:23.506 06:19:35 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:06:23.506 06:19:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:06:23.506 06:19:36 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:23.506 06:19:36 json_config -- json_config/json_config.sh@197 -- # [[ FreeBSD = Linux ]] 00:06:23.506 06:19:36 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:23.506 06:19:36 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:23.506 06:19:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:23.506 06:19:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.765 06:19:36 json_config -- json_config/json_config.sh@327 -- # killprocess 46115 00:06:23.765 06:19:36 json_config -- common/autotest_common.sh@948 -- # '[' -z 46115 ']' 00:06:23.765 06:19:36 json_config -- common/autotest_common.sh@952 -- # kill -0 46115 00:06:23.765 06:19:36 json_config -- common/autotest_common.sh@953 -- # uname 00:06:23.765 06:19:36 json_config -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:23.765 06:19:36 json_config -- common/autotest_common.sh@956 -- # ps -c -o command 46115 00:06:23.765 06:19:36 json_config -- common/autotest_common.sh@956 -- # tail -1 00:06:23.765 06:19:36 json_config -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:06:23.765 06:19:36 json_config -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:06:23.765 killing process with pid 46115 00:06:23.765 06:19:36 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46115' 00:06:23.765 06:19:36 json_config -- common/autotest_common.sh@967 -- # kill 46115 00:06:23.765 06:19:36 json_config -- common/autotest_common.sh@972 -- # wait 46115 00:06:24.023 06:19:36 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:24.023 06:19:36 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:24.024 06:19:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.024 06:19:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.024 06:19:36 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:24.024 INFO: Success 00:06:24.024 06:19:36 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:24.024 00:06:24.024 real 0m11.884s 00:06:24.024 user 0m18.777s 00:06:24.024 sys 0m1.889s 00:06:24.024 06:19:36 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.024 ************************************ 00:06:24.024 06:19:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.024 END TEST json_config 00:06:24.024 ************************************ 00:06:24.024 06:19:36 -- common/autotest_common.sh@1142 -- # return 0 00:06:24.024 06:19:36 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:24.024 06:19:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.024 06:19:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.024 06:19:36 -- common/autotest_common.sh@10 -- # set +x 00:06:24.024 ************************************ 00:06:24.024 START TEST json_config_extra_key 
00:06:24.024 ************************************ 00:06:24.024 06:19:36 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:24.024 06:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:24.024 06:19:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:24.024 06:19:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:06:24.024 06:19:36 json_config_extra_key -- nvmf/common.sh@7 -- # return 0 00:06:24.024 06:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:24.024 06:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:24.024 06:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:24.024 06:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:24.024 06:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:24.024 06:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:24.024 06:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:24.024 06:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:24.024 06:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:24.024 06:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:24.024 INFO: launching applications... 00:06:24.024 06:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:24.024 06:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:24.024 06:19:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:24.024 06:19:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:24.024 06:19:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:24.024 06:19:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:24.024 06:19:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:24.024 06:19:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:24.024 06:19:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:24.024 06:19:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=46248 00:06:24.024 Waiting for target to run... 00:06:24.024 06:19:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
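The per-application bookkeeping initialized above relies on bash associative arrays keyed by app name, roughly:

  declare -A app_pid=(['target']='')
  declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
  declare -A app_params=(['target']='-m 0x1 -s 1024')
  declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
  # json_config_test_start_app / json_config_test_shutdown_app then look everything up by "$app"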
00:06:24.024 06:19:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 46248 /var/tmp/spdk_tgt.sock 00:06:24.024 06:19:36 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:24.024 06:19:36 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 46248 ']' 00:06:24.024 06:19:36 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:24.024 06:19:36 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:24.024 06:19:36 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:24.024 06:19:36 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.024 06:19:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:24.024 [2024-07-23 06:19:36.537556] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:24.024 [2024-07-23 06:19:36.537719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:24.592 EAL: TSC is not safe to use in SMP mode 00:06:24.592 EAL: TSC is not invariant 00:06:24.592 [2024-07-23 06:19:36.832246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.592 [2024-07-23 06:19:36.940532] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:24.592 [2024-07-23 06:19:36.943369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.160 06:19:37 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.160 00:06:25.160 06:19:37 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:25.160 06:19:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:25.160 INFO: shutting down applications... 00:06:25.160 06:19:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:25.160 06:19:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:25.160 06:19:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:25.160 06:19:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:25.160 06:19:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 46248 ]] 00:06:25.160 06:19:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 46248 00:06:25.160 06:19:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:25.160 06:19:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.160 06:19:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46248 00:06:25.160 06:19:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:25.727 06:19:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:25.727 06:19:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.727 06:19:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46248 00:06:25.727 06:19:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:25.727 06:19:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:25.727 06:19:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:25.727 SPDK target shutdown done 00:06:25.727 06:19:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:25.727 Success 00:06:25.727 06:19:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:25.727 00:06:25.727 real 0m1.778s 00:06:25.727 user 0m1.669s 00:06:25.727 sys 0m0.460s 00:06:25.727 06:19:38 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.727 ************************************ 00:06:25.727 END TEST json_config_extra_key 00:06:25.727 ************************************ 00:06:25.727 06:19:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:25.727 06:19:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:25.727 06:19:38 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:25.727 06:19:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.727 06:19:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.727 06:19:38 -- common/autotest_common.sh@10 -- # set +x 00:06:25.727 ************************************ 00:06:25.727 START TEST alias_rpc 00:06:25.727 ************************************ 00:06:25.727 06:19:38 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:25.985 * Looking for test storage... 
00:06:25.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:25.985 06:19:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:25.985 06:19:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=46306 00:06:25.985 06:19:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 46306 00:06:25.985 06:19:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.985 06:19:38 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 46306 ']' 00:06:25.985 06:19:38 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.985 06:19:38 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.985 06:19:38 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.985 06:19:38 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.985 06:19:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.985 [2024-07-23 06:19:38.392292] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:25.985 [2024-07-23 06:19:38.392505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:26.564 EAL: TSC is not safe to use in SMP mode 00:06:26.564 EAL: TSC is not invariant 00:06:26.564 [2024-07-23 06:19:38.913015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.564 [2024-07-23 06:19:39.013221] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:06:26.564 [2024-07-23 06:19:39.015736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.131 06:19:39 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.131 06:19:39 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:27.131 06:19:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:27.390 06:19:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 46306 00:06:27.390 06:19:39 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 46306 ']' 00:06:27.390 06:19:39 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 46306 00:06:27.390 06:19:39 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:27.390 06:19:39 alias_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:27.390 06:19:39 alias_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 46306 00:06:27.390 06:19:39 alias_rpc -- common/autotest_common.sh@956 -- # tail -1 00:06:27.390 06:19:39 alias_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:06:27.390 06:19:39 alias_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:06:27.390 killing process with pid 46306 00:06:27.390 06:19:39 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46306' 00:06:27.390 06:19:39 alias_rpc -- common/autotest_common.sh@967 -- # kill 46306 00:06:27.390 06:19:39 alias_rpc -- common/autotest_common.sh@972 -- # wait 46306 00:06:27.649 00:06:27.649 real 0m1.905s 00:06:27.649 user 0m2.136s 00:06:27.649 sys 0m0.750s 00:06:27.649 ************************************ 00:06:27.649 END TEST alias_rpc 00:06:27.649 ************************************ 00:06:27.649 06:19:40 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.649 06:19:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.908 06:19:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:27.908 06:19:40 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:27.908 06:19:40 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:27.908 06:19:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.908 06:19:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.908 06:19:40 -- common/autotest_common.sh@10 -- # set +x 00:06:27.908 ************************************ 00:06:27.908 START TEST spdkcli_tcp 00:06:27.908 ************************************ 00:06:27.908 06:19:40 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:27.908 * Looking for test storage... 
00:06:27.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:27.908 06:19:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:27.908 06:19:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:27.908 06:19:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:27.908 06:19:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:27.908 06:19:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:27.908 06:19:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:27.908 06:19:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:27.908 06:19:40 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:27.908 06:19:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.908 06:19:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=46367 00:06:27.908 06:19:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:27.908 06:19:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 46367 00:06:27.908 06:19:40 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 46367 ']' 00:06:27.908 06:19:40 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.908 06:19:40 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.908 06:19:40 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.908 06:19:40 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.908 06:19:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.908 [2024-07-23 06:19:40.353092] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:27.908 [2024-07-23 06:19:40.353291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:28.844 EAL: TSC is not safe to use in SMP mode 00:06:28.844 EAL: TSC is not invariant 00:06:28.844 [2024-07-23 06:19:41.026835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.844 [2024-07-23 06:19:41.112796] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:28.844 [2024-07-23 06:19:41.112853] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
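The step that follows bridges the target's UNIX-domain RPC socket to TCP with socat so rpc.py can reach 127.0.0.1:9998; in outline (the -r/-t values are assumed to be connection retries and a timeout, as passed by tcp.sh):

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods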
00:06:28.844 [2024-07-23 06:19:41.115491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.844 [2024-07-23 06:19:41.115480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.102 06:19:41 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.102 06:19:41 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:29.102 06:19:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=46375 00:06:29.102 06:19:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:29.102 06:19:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:29.361 [ 00:06:29.361 "spdk_get_version", 00:06:29.361 "rpc_get_methods", 00:06:29.361 "env_dpdk_get_mem_stats", 00:06:29.361 "trace_get_info", 00:06:29.361 "trace_get_tpoint_group_mask", 00:06:29.361 "trace_disable_tpoint_group", 00:06:29.361 "trace_enable_tpoint_group", 00:06:29.361 "trace_clear_tpoint_mask", 00:06:29.361 "trace_set_tpoint_mask", 00:06:29.361 "notify_get_notifications", 00:06:29.361 "notify_get_types", 00:06:29.361 "accel_get_stats", 00:06:29.361 "accel_set_options", 00:06:29.361 "accel_set_driver", 00:06:29.361 "accel_crypto_key_destroy", 00:06:29.361 "accel_crypto_keys_get", 00:06:29.361 "accel_crypto_key_create", 00:06:29.361 "accel_assign_opc", 00:06:29.361 "accel_get_module_info", 00:06:29.361 "accel_get_opc_assignments", 00:06:29.361 "bdev_get_histogram", 00:06:29.361 "bdev_enable_histogram", 00:06:29.361 "bdev_set_qos_limit", 00:06:29.361 "bdev_set_qd_sampling_period", 00:06:29.361 "bdev_get_bdevs", 00:06:29.361 "bdev_reset_iostat", 00:06:29.361 "bdev_get_iostat", 00:06:29.361 "bdev_examine", 00:06:29.361 "bdev_wait_for_examine", 00:06:29.361 "bdev_set_options", 00:06:29.361 "keyring_get_keys", 00:06:29.361 "framework_get_pci_devices", 00:06:29.361 "framework_get_config", 00:06:29.361 "framework_get_subsystems", 00:06:29.361 "sock_get_default_impl", 00:06:29.361 "sock_set_default_impl", 00:06:29.361 "sock_impl_set_options", 00:06:29.361 "sock_impl_get_options", 00:06:29.361 "thread_set_cpumask", 00:06:29.361 "framework_get_governor", 00:06:29.361 "framework_get_scheduler", 00:06:29.361 "framework_set_scheduler", 00:06:29.361 "framework_get_reactors", 00:06:29.361 "thread_get_io_channels", 00:06:29.361 "thread_get_pollers", 00:06:29.361 "thread_get_stats", 00:06:29.361 "framework_monitor_context_switch", 00:06:29.361 "spdk_kill_instance", 00:06:29.361 "log_enable_timestamps", 00:06:29.361 "log_get_flags", 00:06:29.361 "log_clear_flag", 00:06:29.361 "log_set_flag", 00:06:29.361 "log_get_level", 00:06:29.361 "log_set_level", 00:06:29.361 "log_get_print_level", 00:06:29.361 "log_set_print_level", 00:06:29.361 "framework_enable_cpumask_locks", 00:06:29.361 "framework_disable_cpumask_locks", 00:06:29.361 "framework_wait_init", 00:06:29.361 "framework_start_init", 00:06:29.361 "iobuf_get_stats", 00:06:29.361 "iobuf_set_options", 00:06:29.361 "vmd_rescan", 00:06:29.361 "vmd_remove_device", 00:06:29.361 "vmd_enable", 00:06:29.361 "nvmf_stop_mdns_prr", 00:06:29.361 "nvmf_publish_mdns_prr", 00:06:29.361 "nvmf_subsystem_get_listeners", 00:06:29.361 "nvmf_subsystem_get_qpairs", 00:06:29.361 "nvmf_subsystem_get_controllers", 00:06:29.361 "nvmf_get_stats", 00:06:29.361 "nvmf_get_transports", 00:06:29.361 "nvmf_create_transport", 00:06:29.361 "nvmf_get_targets", 00:06:29.361 "nvmf_delete_target", 00:06:29.361 "nvmf_create_target", 00:06:29.362 
"nvmf_subsystem_allow_any_host", 00:06:29.362 "nvmf_subsystem_remove_host", 00:06:29.362 "nvmf_subsystem_add_host", 00:06:29.362 "nvmf_ns_remove_host", 00:06:29.362 "nvmf_ns_add_host", 00:06:29.362 "nvmf_subsystem_remove_ns", 00:06:29.362 "nvmf_subsystem_add_ns", 00:06:29.362 "nvmf_subsystem_listener_set_ana_state", 00:06:29.362 "nvmf_discovery_get_referrals", 00:06:29.362 "nvmf_discovery_remove_referral", 00:06:29.362 "nvmf_discovery_add_referral", 00:06:29.362 "nvmf_subsystem_remove_listener", 00:06:29.362 "nvmf_subsystem_add_listener", 00:06:29.362 "nvmf_delete_subsystem", 00:06:29.362 "nvmf_create_subsystem", 00:06:29.362 "nvmf_get_subsystems", 00:06:29.362 "nvmf_set_crdt", 00:06:29.362 "nvmf_set_config", 00:06:29.362 "nvmf_set_max_subsystems", 00:06:29.362 "scsi_get_devices", 00:06:29.362 "iscsi_get_histogram", 00:06:29.362 "iscsi_enable_histogram", 00:06:29.362 "iscsi_set_options", 00:06:29.362 "iscsi_get_auth_groups", 00:06:29.362 "iscsi_auth_group_remove_secret", 00:06:29.362 "iscsi_auth_group_add_secret", 00:06:29.362 "iscsi_delete_auth_group", 00:06:29.362 "iscsi_create_auth_group", 00:06:29.362 "iscsi_set_discovery_auth", 00:06:29.362 "iscsi_get_options", 00:06:29.362 "iscsi_target_node_request_logout", 00:06:29.362 "iscsi_target_node_set_redirect", 00:06:29.362 "iscsi_target_node_set_auth", 00:06:29.362 "iscsi_target_node_add_lun", 00:06:29.362 "iscsi_get_stats", 00:06:29.362 "iscsi_get_connections", 00:06:29.362 "iscsi_portal_group_set_auth", 00:06:29.362 "iscsi_start_portal_group", 00:06:29.362 "iscsi_delete_portal_group", 00:06:29.362 "iscsi_create_portal_group", 00:06:29.362 "iscsi_get_portal_groups", 00:06:29.362 "iscsi_delete_target_node", 00:06:29.362 "iscsi_target_node_remove_pg_ig_maps", 00:06:29.362 "iscsi_target_node_add_pg_ig_maps", 00:06:29.362 "iscsi_create_target_node", 00:06:29.362 "iscsi_get_target_nodes", 00:06:29.362 "iscsi_delete_initiator_group", 00:06:29.362 "iscsi_initiator_group_remove_initiators", 00:06:29.362 "iscsi_initiator_group_add_initiators", 00:06:29.362 "iscsi_create_initiator_group", 00:06:29.362 "iscsi_get_initiator_groups", 00:06:29.362 "keyring_file_remove_key", 00:06:29.362 "keyring_file_add_key", 00:06:29.362 "iaa_scan_accel_module", 00:06:29.362 "dsa_scan_accel_module", 00:06:29.362 "ioat_scan_accel_module", 00:06:29.362 "accel_error_inject_error", 00:06:29.362 "bdev_aio_delete", 00:06:29.362 "bdev_aio_rescan", 00:06:29.362 "bdev_aio_create", 00:06:29.362 "blobfs_create", 00:06:29.362 "blobfs_detect", 00:06:29.362 "blobfs_set_cache_size", 00:06:29.362 "bdev_zone_block_delete", 00:06:29.362 "bdev_zone_block_create", 00:06:29.362 "bdev_delay_delete", 00:06:29.362 "bdev_delay_create", 00:06:29.362 "bdev_delay_update_latency", 00:06:29.362 "bdev_split_delete", 00:06:29.362 "bdev_split_create", 00:06:29.362 "bdev_error_inject_error", 00:06:29.362 "bdev_error_delete", 00:06:29.362 "bdev_error_create", 00:06:29.362 "bdev_raid_set_options", 00:06:29.362 "bdev_raid_remove_base_bdev", 00:06:29.362 "bdev_raid_add_base_bdev", 00:06:29.362 "bdev_raid_delete", 00:06:29.362 "bdev_raid_create", 00:06:29.362 "bdev_raid_get_bdevs", 00:06:29.362 "bdev_lvol_set_parent_bdev", 00:06:29.362 "bdev_lvol_set_parent", 00:06:29.362 "bdev_lvol_check_shallow_copy", 00:06:29.362 "bdev_lvol_start_shallow_copy", 00:06:29.362 "bdev_lvol_grow_lvstore", 00:06:29.362 "bdev_lvol_get_lvols", 00:06:29.362 "bdev_lvol_get_lvstores", 00:06:29.362 "bdev_lvol_delete", 00:06:29.362 "bdev_lvol_set_read_only", 00:06:29.362 "bdev_lvol_resize", 00:06:29.362 "bdev_lvol_decouple_parent", 
00:06:29.362 "bdev_lvol_inflate", 00:06:29.362 "bdev_lvol_rename", 00:06:29.362 "bdev_lvol_clone_bdev", 00:06:29.362 "bdev_lvol_clone", 00:06:29.362 "bdev_lvol_snapshot", 00:06:29.362 "bdev_lvol_create", 00:06:29.362 "bdev_lvol_delete_lvstore", 00:06:29.362 "bdev_lvol_rename_lvstore", 00:06:29.362 "bdev_lvol_create_lvstore", 00:06:29.362 "bdev_passthru_delete", 00:06:29.362 "bdev_passthru_create", 00:06:29.362 "bdev_nvme_send_cmd", 00:06:29.362 "bdev_nvme_get_path_iostat", 00:06:29.362 "bdev_nvme_get_mdns_discovery_info", 00:06:29.362 "bdev_nvme_stop_mdns_discovery", 00:06:29.362 "bdev_nvme_start_mdns_discovery", 00:06:29.362 "bdev_nvme_set_multipath_policy", 00:06:29.362 "bdev_nvme_set_preferred_path", 00:06:29.362 "bdev_nvme_get_io_paths", 00:06:29.362 "bdev_nvme_remove_error_injection", 00:06:29.362 "bdev_nvme_add_error_injection", 00:06:29.362 "bdev_nvme_get_discovery_info", 00:06:29.362 "bdev_nvme_stop_discovery", 00:06:29.362 "bdev_nvme_start_discovery", 00:06:29.362 "bdev_nvme_get_controller_health_info", 00:06:29.362 "bdev_nvme_disable_controller", 00:06:29.362 "bdev_nvme_enable_controller", 00:06:29.362 "bdev_nvme_reset_controller", 00:06:29.362 "bdev_nvme_get_transport_statistics", 00:06:29.362 "bdev_nvme_apply_firmware", 00:06:29.362 "bdev_nvme_detach_controller", 00:06:29.362 "bdev_nvme_get_controllers", 00:06:29.362 "bdev_nvme_attach_controller", 00:06:29.362 "bdev_nvme_set_hotplug", 00:06:29.362 "bdev_nvme_set_options", 00:06:29.362 "bdev_null_resize", 00:06:29.362 "bdev_null_delete", 00:06:29.362 "bdev_null_create", 00:06:29.362 "bdev_malloc_delete", 00:06:29.362 "bdev_malloc_create" 00:06:29.362 ] 00:06:29.362 06:19:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:29.362 06:19:41 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:29.362 06:19:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.362 06:19:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:29.362 06:19:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 46367 00:06:29.362 06:19:41 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 46367 ']' 00:06:29.362 06:19:41 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 46367 00:06:29.362 06:19:41 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:29.362 06:19:41 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:29.362 06:19:41 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps -c -o command 46367 00:06:29.362 06:19:41 spdkcli_tcp -- common/autotest_common.sh@956 -- # tail -1 00:06:29.363 06:19:41 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:06:29.363 killing process with pid 46367 00:06:29.363 06:19:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:06:29.363 06:19:41 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46367' 00:06:29.363 06:19:41 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 46367 00:06:29.363 06:19:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 46367 00:06:29.621 00:06:29.621 real 0m1.807s 00:06:29.621 user 0m2.548s 00:06:29.621 sys 0m0.935s 00:06:29.621 06:19:41 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.621 ************************************ 00:06:29.621 END TEST spdkcli_tcp 00:06:29.621 ************************************ 00:06:29.621 06:19:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.621 06:19:42 -- common/autotest_common.sh@1142 -- # return 
0 00:06:29.621 06:19:42 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:29.621 06:19:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.621 06:19:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.621 06:19:42 -- common/autotest_common.sh@10 -- # set +x 00:06:29.621 ************************************ 00:06:29.621 START TEST dpdk_mem_utility 00:06:29.621 ************************************ 00:06:29.621 06:19:42 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:29.880 * Looking for test storage... 00:06:29.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:29.880 06:19:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:29.880 06:19:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=46446 00:06:29.880 06:19:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 46446 00:06:29.880 06:19:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:29.880 06:19:42 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 46446 ']' 00:06:29.880 06:19:42 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.880 06:19:42 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.880 06:19:42 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.880 06:19:42 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.880 06:19:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.880 [2024-07-23 06:19:42.166785] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:29.880 [2024-07-23 06:19:42.166994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:30.446 EAL: TSC is not safe to use in SMP mode 00:06:30.446 EAL: TSC is not invariant 00:06:30.446 [2024-07-23 06:19:42.722370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.446 [2024-07-23 06:19:42.810154] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
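Once the target is up, the utility is exercised against a fresh memory dump; in outline (the dump path comes from the RPC response shown below):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats      # writes /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                   # heap / mempool / memzone summary
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0              # detailed element listing, argument as used in this run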
00:06:30.446 [2024-07-23 06:19:42.812395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.704 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.704 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:30.705 06:19:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:30.705 06:19:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:30.705 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.705 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:30.983 { 00:06:30.983 "filename": "/tmp/spdk_mem_dump.txt" 00:06:30.983 } 00:06:30.983 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.983 06:19:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:30.983 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:06:30.983 1 heaps totaling size 2048.000000 MiB 00:06:30.983 size: 2048.000000 MiB heap id: 0 00:06:30.983 end heaps---------- 00:06:30.983 8 mempools totaling size 592.563660 MiB 00:06:30.983 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:06:30.983 size: 153.489014 MiB name: PDU_data_out_Pool 00:06:30.983 size: 84.500549 MiB name: bdev_io_46446 00:06:30.983 size: 51.008362 MiB name: evtpool_46446 00:06:30.983 size: 50.000549 MiB name: msgpool_46446 00:06:30.983 size: 21.758911 MiB name: PDU_Pool 00:06:30.983 size: 19.508911 MiB name: SCSI_TASK_Pool 00:06:30.983 size: 0.026123 MiB name: Session_Pool 00:06:30.983 end mempools------- 00:06:30.983 6 memzones totaling size 4.142822 MiB 00:06:30.983 size: 1.000366 MiB name: RG_ring_0_46446 00:06:30.983 size: 1.000366 MiB name: RG_ring_1_46446 00:06:30.983 size: 1.000366 MiB name: RG_ring_4_46446 00:06:30.983 size: 1.000366 MiB name: RG_ring_5_46446 00:06:30.983 size: 0.125366 MiB name: RG_ring_2_46446 00:06:30.983 size: 0.015991 MiB name: RG_ring_3_46446 00:06:30.983 end memzones------- 00:06:30.983 06:19:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:30.983 heap id: 0 total size: 2048.000000 MiB number of busy elements: 39 number of free elements: 3 00:06:30.983 list of free elements. size: 1254.071899 MiB 00:06:30.983 element at address: 0x1060000000 with size: 1254.001099 MiB 00:06:30.983 element at address: 0x10c8000000 with size: 0.070129 MiB 00:06:30.983 element at address: 0x10d98b6000 with size: 0.000671 MiB 00:06:30.983 list of standard malloc elements. 
size: 197.217957 MiB 00:06:30.983 element at address: 0x10cd4b0f80 with size: 132.000122 MiB 00:06:30.983 element at address: 0x10d58b5f80 with size: 64.000122 MiB 00:06:30.983 element at address: 0x10c7efff80 with size: 1.000122 MiB 00:06:30.983 element at address: 0x10dffd9f00 with size: 0.140747 MiB 00:06:30.983 element at address: 0x10c8020c80 with size: 0.062622 MiB 00:06:30.983 element at address: 0x10dfffdf80 with size: 0.007935 MiB 00:06:30.983 element at address: 0x10d58b1000 with size: 0.000305 MiB 00:06:30.983 element at address: 0x10d58b18c0 with size: 0.000305 MiB 00:06:30.983 element at address: 0x10d58b1140 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d58b1200 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d58b12c0 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d58b1380 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d58b1440 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d58b1500 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d58b15c0 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d58b1680 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d58b1740 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d58b1800 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d58b1a00 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d58b1ac0 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d58b1cc0 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d98b62c0 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d98b6380 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d98b6440 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d98b6500 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d98b65c0 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d98b6680 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d98b6880 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d98b6940 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d98d6c00 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d98d6cc0 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d99d6f80 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d9ad7240 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10d9ad7300 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10dccd7640 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10dccd7840 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10dccd7900 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10dfed7c40 with size: 0.000183 MiB 00:06:30.983 element at address: 0x10dffd9e40 with size: 0.000183 MiB 00:06:30.983 list of memzone associated elements. 
size: 596.710144 MiB 00:06:30.983 element at address: 0x10b93f7f00 with size: 211.013000 MiB 00:06:30.983 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:06:30.983 element at address: 0x10afa82c80 with size: 152.449524 MiB 00:06:30.983 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:06:30.983 element at address: 0x10c8030d00 with size: 84.000122 MiB 00:06:30.983 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_46446_0 00:06:30.983 element at address: 0x10dccd79c0 with size: 48.000122 MiB 00:06:30.983 associated memzone info: size: 48.000000 MiB name: MP_evtpool_46446_0 00:06:30.983 element at address: 0x10d9ad73c0 with size: 48.000122 MiB 00:06:30.983 associated memzone info: size: 48.000000 MiB name: MP_msgpool_46446_0 00:06:30.983 element at address: 0x10c683d780 with size: 20.250671 MiB 00:06:30.983 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:06:30.983 element at address: 0x10ae700680 with size: 18.000671 MiB 00:06:30.983 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:06:30.983 element at address: 0x10dfcd7a40 with size: 2.000488 MiB 00:06:30.983 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_46446 00:06:30.983 element at address: 0x10dcad7440 with size: 2.000488 MiB 00:06:30.983 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_46446 00:06:30.983 element at address: 0x10dfed7d00 with size: 1.008118 MiB 00:06:30.983 associated memzone info: size: 1.007996 MiB name: MP_evtpool_46446 00:06:30.983 element at address: 0x10c7cfdc40 with size: 1.008118 MiB 00:06:30.983 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:30.983 element at address: 0x10c673b640 with size: 1.008118 MiB 00:06:30.983 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:30.983 element at address: 0x10b92f5dc0 with size: 1.008118 MiB 00:06:30.983 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:30.983 element at address: 0x10af980b40 with size: 1.008118 MiB 00:06:30.983 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:30.983 element at address: 0x10d99d7040 with size: 1.000488 MiB 00:06:30.983 associated memzone info: size: 1.000366 MiB name: RG_ring_0_46446 00:06:30.983 element at address: 0x10d98d6d80 with size: 1.000488 MiB 00:06:30.983 associated memzone info: size: 1.000366 MiB name: RG_ring_1_46446 00:06:30.983 element at address: 0x10c7dffd80 with size: 1.000488 MiB 00:06:30.983 associated memzone info: size: 1.000366 MiB name: RG_ring_4_46446 00:06:30.983 element at address: 0x10ae600480 with size: 1.000488 MiB 00:06:30.983 associated memzone info: size: 1.000366 MiB name: RG_ring_5_46446 00:06:30.983 element at address: 0x10cd430d80 with size: 0.500488 MiB 00:06:30.983 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_46446 00:06:30.983 element at address: 0x10c7c7da40 with size: 0.500488 MiB 00:06:30.983 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:30.983 element at address: 0x10af900940 with size: 0.500488 MiB 00:06:30.983 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:30.983 element at address: 0x10c66fb440 with size: 0.250488 MiB 00:06:30.983 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:30.983 element at address: 0x10d98b6a00 with size: 0.125488 MiB 00:06:30.983 associated memzone info: size: 0.125366 MiB name: RG_ring_2_46446 00:06:30.983 
element at address: 0x10c8018a80 with size: 0.031738 MiB 00:06:30.983 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:30.983 element at address: 0x10c8011f40 with size: 0.023743 MiB 00:06:30.983 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:30.983 element at address: 0x10d58b1d80 with size: 0.016113 MiB 00:06:30.983 associated memzone info: size: 0.015991 MiB name: RG_ring_3_46446 00:06:30.983 element at address: 0x10c8018080 with size: 0.002441 MiB 00:06:30.983 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:30.983 element at address: 0x10dccd7700 with size: 0.000305 MiB 00:06:30.983 associated memzone info: size: 0.000183 MiB name: MP_msgpool_46446 00:06:30.983 element at address: 0x10d58b1b80 with size: 0.000305 MiB 00:06:30.983 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_46446 00:06:30.983 element at address: 0x10d98b6740 with size: 0.000305 MiB 00:06:30.983 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:30.983 06:19:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:30.983 06:19:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 46446 00:06:30.983 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 46446 ']' 00:06:30.983 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 46446 00:06:30.983 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:30.983 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:30.983 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps -c -o command 46446 00:06:30.983 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # tail -1 00:06:30.983 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:06:30.983 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:06:30.983 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46446' 00:06:30.983 killing process with pid 46446 00:06:30.983 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 46446 00:06:30.983 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 46446 00:06:31.243 00:06:31.243 real 0m1.585s 00:06:31.243 user 0m1.541s 00:06:31.243 sys 0m0.748s 00:06:31.243 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.243 ************************************ 00:06:31.243 END TEST dpdk_mem_utility 00:06:31.243 ************************************ 00:06:31.243 06:19:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:31.243 06:19:43 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.243 06:19:43 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:31.244 06:19:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.244 06:19:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.244 06:19:43 -- common/autotest_common.sh@10 -- # set +x 00:06:31.244 ************************************ 00:06:31.244 START TEST event 00:06:31.244 ************************************ 00:06:31.244 06:19:43 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:31.244 * Looking for test storage... 
00:06:31.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:31.502 06:19:43 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:31.502 06:19:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:31.502 06:19:43 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:31.502 06:19:43 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:31.502 06:19:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.502 06:19:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.502 ************************************ 00:06:31.502 START TEST event_perf 00:06:31.502 ************************************ 00:06:31.502 06:19:43 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:31.502 Running I/O for 1 seconds...[2024-07-23 06:19:43.779822] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:31.502 [2024-07-23 06:19:43.780048] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:32.130 EAL: TSC is not safe to use in SMP mode 00:06:32.130 EAL: TSC is not invariant 00:06:32.130 [2024-07-23 06:19:44.360801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.130 [2024-07-23 06:19:44.462379] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:32.130 [2024-07-23 06:19:44.462446] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:32.130 [2024-07-23 06:19:44.462459] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:06:32.130 [2024-07-23 06:19:44.462470] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:06:32.130 [2024-07-23 06:19:44.466958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.130 [2024-07-23 06:19:44.467180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.130 [2024-07-23 06:19:44.467061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.130 Running I/O for 1 seconds...[2024-07-23 06:19:44.467173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.064 00:06:33.064 lcore 0: 2552859 00:06:33.064 lcore 1: 2552859 00:06:33.064 lcore 2: 2552858 00:06:33.064 lcore 3: 2552857 00:06:33.322 done. 
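For context on the lcore lines above: event_perf was started with -m 0xF (a four-core mask) and -t 1 (one second), so each "lcore N: <count>" line is the number of events that core processed during the run, roughly 2.55 million per core or about 10.2 million in total here. A throwaway shell helper along these lines, not part of the SPDK test scripts, could total the counts from a captured log:

    # hypothetical helper: sum the "lcore N: <count>" lines that event_perf prints
    sum_lcore_events() {
        awk '/lcore [0-9]+:/ { total += $NF; n++ }
             END { printf "%d events across %d cores\n", total, n }'
    }
    # usage: grep "lcore" event_perf.log | sum_lcore_events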
00:06:33.322 00:06:33.322 real 0m1.809s 00:06:33.322 user 0m4.183s 00:06:33.322 sys 0m0.620s 00:06:33.322 06:19:45 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.322 ************************************ 00:06:33.322 END TEST event_perf 00:06:33.322 ************************************ 00:06:33.322 06:19:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.322 06:19:45 event -- common/autotest_common.sh@1142 -- # return 0 00:06:33.322 06:19:45 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:33.322 06:19:45 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:33.322 06:19:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.322 06:19:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.322 ************************************ 00:06:33.322 START TEST event_reactor 00:06:33.322 ************************************ 00:06:33.322 06:19:45 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:33.322 [2024-07-23 06:19:45.632567] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:33.322 [2024-07-23 06:19:45.632861] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:33.889 EAL: TSC is not safe to use in SMP mode 00:06:33.889 EAL: TSC is not invariant 00:06:33.889 [2024-07-23 06:19:46.147361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.889 [2024-07-23 06:19:46.247895] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:33.889 [2024-07-23 06:19:46.250385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.265 test_start 00:06:35.265 oneshot 00:06:35.265 tick 100 00:06:35.265 tick 100 00:06:35.265 tick 250 00:06:35.265 tick 100 00:06:35.265 tick 100 00:06:35.265 tick 100 00:06:35.265 tick 250 00:06:35.265 tick 500 00:06:35.265 tick 100 00:06:35.265 tick 100 00:06:35.265 tick 250 00:06:35.265 tick 100 00:06:35.265 tick 100 00:06:35.265 test_end 00:06:35.265 00:06:35.265 real 0m1.745s 00:06:35.265 user 0m1.192s 00:06:35.265 sys 0m0.550s 00:06:35.265 06:19:47 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.265 06:19:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:35.265 ************************************ 00:06:35.265 END TEST event_reactor 00:06:35.265 ************************************ 00:06:35.265 06:19:47 event -- common/autotest_common.sh@1142 -- # return 0 00:06:35.265 06:19:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:35.265 06:19:47 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:35.265 06:19:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.265 06:19:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.265 ************************************ 00:06:35.265 START TEST event_reactor_perf 00:06:35.265 ************************************ 00:06:35.265 06:19:47 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:35.265 [2024-07-23 06:19:47.417859] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:35.265 [2024-07-23 06:19:47.418120] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:35.524 EAL: TSC is not safe to use in SMP mode 00:06:35.524 EAL: TSC is not invariant 00:06:35.524 [2024-07-23 06:19:47.931218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.524 [2024-07-23 06:19:48.017855] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:35.524 [2024-07-23 06:19:48.019934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.910 test_start 00:06:36.910 test_end 00:06:36.910 Performance: 3566430 events per second 00:06:36.910 00:06:36.910 real 0m1.728s 00:06:36.910 user 0m1.190s 00:06:36.910 sys 0m0.535s 00:06:36.910 06:19:49 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.910 06:19:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.910 ************************************ 00:06:36.910 END TEST event_reactor_perf 00:06:36.910 ************************************ 00:06:36.910 06:19:49 event -- common/autotest_common.sh@1142 -- # return 0 00:06:36.910 06:19:49 event -- event/event.sh@49 -- # uname -s 00:06:36.910 06:19:49 event -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:06:36.910 00:06:36.910 real 0m5.529s 00:06:36.910 user 0m6.703s 00:06:36.910 sys 0m1.841s 00:06:36.910 06:19:49 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.910 ************************************ 00:06:36.910 END TEST event 00:06:36.910 06:19:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.910 ************************************ 00:06:36.910 06:19:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:36.910 06:19:49 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:36.910 06:19:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.910 06:19:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.910 06:19:49 -- common/autotest_common.sh@10 -- # set +x 00:06:36.910 ************************************ 00:06:36.910 START TEST thread 00:06:36.910 ************************************ 00:06:36.910 06:19:49 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:36.910 * Looking for test storage... 00:06:36.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:36.910 06:19:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:36.911 06:19:49 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:36.911 06:19:49 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.911 06:19:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.911 ************************************ 00:06:36.911 START TEST thread_poller_perf 00:06:36.911 ************************************ 00:06:36.911 06:19:49 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:36.911 [2024-07-23 06:19:49.372999] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:36.911 [2024-07-23 06:19:49.373213] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:37.476 EAL: TSC is not safe to use in SMP mode 00:06:37.476 EAL: TSC is not invariant 00:06:37.476 [2024-07-23 06:19:49.902823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.476 [2024-07-23 06:19:49.988657] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:37.476 [2024-07-23 06:19:49.990997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.476 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:38.853 ====================================== 00:06:38.853 busy:2201998689 (cyc) 00:06:38.853 total_run_count: 5664000 00:06:38.853 tsc_hz: 2199994391 (cyc) 00:06:38.853 ====================================== 00:06:38.853 poller_cost: 388 (cyc), 176 (nsec) 00:06:38.853 00:06:38.853 real 0m1.739s 00:06:38.853 user 0m1.180s 00:06:38.853 sys 0m0.556s 00:06:38.853 06:19:51 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.853 06:19:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.853 ************************************ 00:06:38.853 END TEST thread_poller_perf 00:06:38.853 ************************************ 00:06:38.853 06:19:51 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:38.853 06:19:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:38.853 06:19:51 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:38.853 06:19:51 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.853 06:19:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.853 ************************************ 00:06:38.853 START TEST thread_poller_perf 00:06:38.853 ************************************ 00:06:38.854 06:19:51 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:38.854 [2024-07-23 06:19:51.159241] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:38.854 [2024-07-23 06:19:51.159445] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:39.418 EAL: TSC is not safe to use in SMP mode 00:06:39.418 EAL: TSC is not invariant 00:06:39.418 [2024-07-23 06:19:51.691732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.418 [2024-07-23 06:19:51.774827] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:39.418 [2024-07-23 06:19:51.776940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.418 Running 1000 pollers for 1 seconds with 0 microseconds period. 
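The poller_cost figures in these blocks are simple arithmetic on the numbers printed alongside them: busy cycles divided by total_run_count gives cycles per poller execution, and dividing by tsc_hz converts that to nanoseconds. An illustrative awk check (not part of the harness, and rounding may differ slightly from the tool's own truncation) reproduces the first run's 388 cyc / 176 nsec:

    awk -v busy=2201998689 -v runs=5664000 -v hz=2199994391 'BEGIN {
        cyc = busy / runs                              # ~388.8 cycles per poll
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / hz * 1e9
    }'

The same ratio for the zero-period run reported just below, 2201135530 cycles over 71378000 executions, lands near 30 cycles, i.e. the 13 nsec shown there.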
00:06:40.790 ====================================== 00:06:40.790 busy:2201135530 (cyc) 00:06:40.790 total_run_count: 71378000 00:06:40.790 tsc_hz: 2199994391 (cyc) 00:06:40.790 ====================================== 00:06:40.790 poller_cost: 30 (cyc), 13 (nsec) 00:06:40.790 00:06:40.790 real 0m1.742s 00:06:40.790 user 0m1.177s 00:06:40.790 sys 0m0.561s 00:06:40.790 06:19:52 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.790 06:19:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.790 ************************************ 00:06:40.790 END TEST thread_poller_perf 00:06:40.790 ************************************ 00:06:40.790 06:19:52 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:40.790 06:19:52 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:40.790 06:19:52 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:06:40.790 06:19:52 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.790 06:19:52 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.790 06:19:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.790 ************************************ 00:06:40.790 START TEST thread_spdk_lock 00:06:40.790 ************************************ 00:06:40.790 06:19:52 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:06:40.790 [2024-07-23 06:19:52.940766] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:40.790 [2024-07-23 06:19:52.940943] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:41.047 EAL: TSC is not safe to use in SMP mode 00:06:41.047 EAL: TSC is not invariant 00:06:41.047 [2024-07-23 06:19:53.472883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.047 [2024-07-23 06:19:53.560668] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:41.047 [2024-07-23 06:19:53.560744] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:06:41.047 [2024-07-23 06:19:53.563600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.305 [2024-07-23 06:19:53.563593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.563 [2024-07-23 06:19:53.999511] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:41.563 [2024-07-23 06:19:53.999576] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:41.563 [2024-07-23 06:19:53.999592] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x316d60 00:06:41.563 [2024-07-23 06:19:54.000151] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:41.563 [2024-07-23 06:19:54.000251] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:41.563 [2024-07-23 06:19:54.000265] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:41.821 Starting test contend 00:06:41.821 Worker Delay Wait us Hold us Total us 00:06:41.821 0 3 257812 163144 420957 00:06:41.821 1 5 161742 262875 424617 00:06:41.821 PASS test contend 00:06:41.821 Starting test hold_by_poller 00:06:41.821 PASS test hold_by_poller 00:06:41.821 Starting test hold_by_message 00:06:41.821 PASS test hold_by_message 00:06:41.821 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:06:41.821 100014 assertions passed 00:06:41.821 0 assertions failed 00:06:41.821 00:06:41.821 real 0m1.182s 00:06:41.821 user 0m1.038s 00:06:41.821 sys 0m0.567s 00:06:41.821 06:19:54 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.821 06:19:54 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:41.821 ************************************ 00:06:41.821 END TEST thread_spdk_lock 00:06:41.821 ************************************ 00:06:41.821 06:19:54 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:41.821 00:06:41.821 real 0m4.933s 00:06:41.821 user 0m3.545s 00:06:41.821 sys 0m1.860s 00:06:41.821 06:19:54 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.821 06:19:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.821 ************************************ 00:06:41.821 END TEST thread 00:06:41.821 ************************************ 00:06:41.821 06:19:54 -- common/autotest_common.sh@1142 -- # return 0 00:06:41.821 06:19:54 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:41.821 06:19:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.821 06:19:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.821 06:19:54 -- common/autotest_common.sh@10 -- # set +x 00:06:41.821 ************************************ 00:06:41.821 START TEST accel 00:06:41.821 ************************************ 00:06:41.821 06:19:54 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:41.821 * Looking for test storage... 
00:06:41.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:41.821 06:19:54 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:41.821 06:19:54 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:41.821 06:19:54 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:41.821 06:19:54 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=46750 00:06:41.821 06:19:54 accel -- accel/accel.sh@63 -- # waitforlisten 46750 00:06:41.822 06:19:54 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.2v2OLg 00:06:41.822 06:19:54 accel -- common/autotest_common.sh@829 -- # '[' -z 46750 ']' 00:06:41.822 06:19:54 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.822 06:19:54 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.822 06:19:54 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.822 06:19:54 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.822 06:19:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.081 [2024-07-23 06:19:54.340535] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:42.081 [2024-07-23 06:19:54.340842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:42.647 EAL: TSC is not safe to use in SMP mode 00:06:42.647 EAL: TSC is not invariant 00:06:42.647 [2024-07-23 06:19:54.892465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.647 [2024-07-23 06:19:54.979882] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:42.647 06:19:54 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:42.647 06:19:54 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.647 06:19:54 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.647 06:19:54 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.647 06:19:54 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.647 06:19:54 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.647 06:19:54 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:42.647 06:19:54 accel -- accel/accel.sh@41 -- # jq -r . 00:06:42.647 [2024-07-23 06:19:54.988925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@862 -- # return 0 00:06:42.906 06:19:55 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:42.906 06:19:55 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:42.906 06:19:55 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:42.906 06:19:55 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:42.906 06:19:55 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:42.906 06:19:55 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.906 06:19:55 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.906 06:19:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.906 06:19:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.906 06:19:55 accel -- accel/accel.sh@75 -- # killprocess 46750 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@948 -- # '[' -z 46750 ']' 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@952 -- # kill -0 46750 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@953 -- # uname 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@956 -- # ps -c -o command 46750 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@956 -- # tail -1 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46750' 00:06:42.906 killing process with pid 46750 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@967 -- # kill 46750 00:06:42.906 06:19:55 accel -- common/autotest_common.sh@972 -- # wait 46750 00:06:43.163 06:19:55 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:43.163 06:19:55 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:43.163 06:19:55 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:43.163 06:19:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.163 06:19:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.163 06:19:55 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:43.163 06:19:55 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.X0OVUs -h 00:06:43.163 06:19:55 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.163 06:19:55 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:43.421 06:19:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.421 06:19:55 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:43.421 06:19:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:43.421 06:19:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.421 06:19:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.421 ************************************ 00:06:43.421 START TEST accel_missing_filename 00:06:43.421 
************************************ 00:06:43.421 06:19:55 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:43.421 06:19:55 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:43.421 06:19:55 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:43.421 06:19:55 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:43.421 06:19:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.421 06:19:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:43.421 06:19:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.421 06:19:55 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:43.421 06:19:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.4Rfuv6 -t 1 -w compress 00:06:43.421 [2024-07-23 06:19:55.709705] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:43.421 [2024-07-23 06:19:55.709923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:43.987 EAL: TSC is not safe to use in SMP mode 00:06:43.987 EAL: TSC is not invariant 00:06:43.987 [2024-07-23 06:19:56.246439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.987 [2024-07-23 06:19:56.339205] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:43.987 06:19:56 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:43.987 06:19:56 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.987 06:19:56 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.987 06:19:56 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.987 06:19:56 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.987 06:19:56 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.987 06:19:56 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:43.987 06:19:56 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:43.987 [2024-07-23 06:19:56.349682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.987 [2024-07-23 06:19:56.352637] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.987 [2024-07-23 06:19:56.389283] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:44.250 A filename is required. 
00:06:44.250 06:19:56 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:44.250 06:19:56 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.250 06:19:56 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:44.250 06:19:56 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:44.250 06:19:56 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:44.250 06:19:56 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.250 00:06:44.250 real 0m0.813s 00:06:44.250 user 0m0.249s 00:06:44.250 sys 0m0.563s 00:06:44.250 06:19:56 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.250 06:19:56 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:44.250 ************************************ 00:06:44.250 END TEST accel_missing_filename 00:06:44.250 ************************************ 00:06:44.250 06:19:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.250 06:19:56 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:44.250 06:19:56 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:44.250 06:19:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.250 06:19:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.250 ************************************ 00:06:44.250 START TEST accel_compress_verify 00:06:44.250 ************************************ 00:06:44.250 06:19:56 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:44.250 06:19:56 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:44.250 06:19:56 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:44.250 06:19:56 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:44.250 06:19:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.250 06:19:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:44.250 06:19:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.250 06:19:56 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:44.250 06:19:56 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.QdGyDq -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:44.250 [2024-07-23 06:19:56.564594] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:44.250 [2024-07-23 06:19:56.564849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:44.816 EAL: TSC is not safe to use in SMP mode 00:06:44.816 EAL: TSC is not invariant 00:06:44.816 [2024-07-23 06:19:57.088425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.816 [2024-07-23 06:19:57.176892] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:44.816 06:19:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:44.816 06:19:57 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.816 06:19:57 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.816 06:19:57 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.816 06:19:57 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.816 06:19:57 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.816 06:19:57 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:44.816 06:19:57 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:44.816 [2024-07-23 06:19:57.187749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.816 [2024-07-23 06:19:57.190236] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.816 [2024-07-23 06:19:57.225820] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:45.075 00:06:45.076 Compression does not support the verify option, aborting. 00:06:45.076 06:19:57 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=211 00:06:45.076 06:19:57 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.076 06:19:57 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=83 00:06:45.076 06:19:57 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:45.076 06:19:57 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:45.076 06:19:57 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.076 00:06:45.076 real 0m0.793s 00:06:45.076 user 0m0.216s 00:06:45.076 sys 0m0.580s 00:06:45.076 06:19:57 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.076 06:19:57 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:45.076 ************************************ 00:06:45.076 END TEST accel_compress_verify 00:06:45.076 ************************************ 00:06:45.076 06:19:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.076 06:19:57 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:45.076 06:19:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:45.076 06:19:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.076 06:19:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.076 ************************************ 00:06:45.076 START TEST accel_wrong_workload 00:06:45.076 ************************************ 00:06:45.076 06:19:57 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:45.076 06:19:57 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:45.076 06:19:57 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # 
valid_exec_arg accel_perf -t 1 -w foobar 00:06:45.076 06:19:57 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:45.076 06:19:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.076 06:19:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:45.076 06:19:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.076 06:19:57 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:45.076 06:19:57 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.vPGgrF -t 1 -w foobar 00:06:45.076 Unsupported workload type: foobar 00:06:45.076 [2024-07-23 06:19:57.402311] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:45.076 accel_perf options: 00:06:45.076 [-h help message] 00:06:45.076 [-q queue depth per core] 00:06:45.076 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:45.076 [-T number of threads per core 00:06:45.076 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:45.076 [-t time in seconds] 00:06:45.076 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:45.076 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:45.076 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:45.076 [-l for compress/decompress workloads, name of uncompressed input file 00:06:45.076 [-S for crc32c workload, use this seed value (default 0) 00:06:45.076 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:45.076 [-f for fill workload, use this BYTE value (default 255) 00:06:45.076 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:45.076 [-y verify result if this switch is on] 00:06:45.076 [-a tasks to allocate per core (default: same value as -q)] 00:06:45.076 Can be used to spread operations across a wider range of memory. 
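The usage text above is printed because the '-w foobar' workload is rejected; the same switch list governs the accel tests that follow. As a rough illustration consistent with that help output (queue depth and transfer size picked arbitrarily), a well-formed run of the same binary would look like:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -S 32 -y -q 64 -o 4096
    #   -t 1       run for one second
    #   -w crc32c  workload type
    #   -S 32      seed value for the CRC32C calculation
    #   -y         verify the results
    #   -q 64      queue depth per core
    #   -o 4096    transfer size in bytes

The accel_crc32c test further down drives exactly this binary with '-t 1 -w crc32c -S 32 -y'.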
00:06:45.076 06:19:57 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:45.076 06:19:57 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.076 06:19:57 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.076 06:19:57 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.076 00:06:45.076 real 0m0.010s 00:06:45.076 user 0m0.009s 00:06:45.076 sys 0m0.000s 00:06:45.076 06:19:57 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.076 06:19:57 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:45.076 ************************************ 00:06:45.076 END TEST accel_wrong_workload 00:06:45.076 ************************************ 00:06:45.076 06:19:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.076 06:19:57 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:45.076 06:19:57 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:45.076 06:19:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.076 06:19:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.076 ************************************ 00:06:45.076 START TEST accel_negative_buffers 00:06:45.076 ************************************ 00:06:45.076 06:19:57 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:45.076 06:19:57 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:45.076 06:19:57 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:45.076 06:19:57 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:45.076 06:19:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.076 06:19:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:45.076 06:19:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.076 06:19:57 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:45.076 06:19:57 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.RxMEXg -t 1 -w xor -y -x -1 00:06:45.076 -x option must be non-negative. 00:06:45.076 [2024-07-23 06:19:57.458479] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:45.076 accel_perf options: 00:06:45.076 [-h help message] 00:06:45.076 [-q queue depth per core] 00:06:45.076 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:45.076 [-T number of threads per core 00:06:45.076 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:45.076 [-t time in seconds] 00:06:45.076 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:45.076 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:45.076 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:45.076 [-l for compress/decompress workloads, name of uncompressed input file 00:06:45.076 [-S for crc32c workload, use this seed value (default 0) 00:06:45.076 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:45.076 [-f for fill workload, use this BYTE value (default 255) 00:06:45.076 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:45.076 [-y verify result if this switch is on] 00:06:45.076 [-a tasks to allocate per core (default: same value as -q)] 00:06:45.076 Can be used to spread operations across a wider range of memory. 00:06:45.076 06:19:57 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:45.076 06:19:57 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.076 06:19:57 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.076 06:19:57 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.076 00:06:45.076 real 0m0.011s 00:06:45.076 user 0m0.002s 00:06:45.076 sys 0m0.009s 00:06:45.076 06:19:57 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.076 06:19:57 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:45.076 ************************************ 00:06:45.076 END TEST accel_negative_buffers 00:06:45.076 ************************************ 00:06:45.076 06:19:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.076 06:19:57 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:45.076 06:19:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:45.076 06:19:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.076 06:19:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.076 ************************************ 00:06:45.076 START TEST accel_crc32c 00:06:45.076 ************************************ 00:06:45.076 06:19:57 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:45.076 06:19:57 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:45.076 06:19:57 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:45.076 06:19:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.076 06:19:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.076 06:19:57 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:45.076 06:19:57 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.372l7I -t 1 -w crc32c -S 32 -y 00:06:45.077 [2024-07-23 06:19:57.510529] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:45.077 [2024-07-23 06:19:57.510796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:45.644 EAL: TSC is not safe to use in SMP mode 00:06:45.644 EAL: TSC is not invariant 00:06:45.644 [2024-07-23 06:19:58.036253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.644 [2024-07-23 06:19:58.123869] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:45.644 [2024-07-23 06:19:58.134527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:45.644 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.645 06:19:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.052 
06:19:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.052 06:19:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.053 06:19:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.053 06:19:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.053 06:19:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.053 06:19:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.053 06:19:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.053 06:19:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.053 06:19:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.053 06:19:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.053 06:19:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.053 06:19:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:47.053 06:19:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.053 00:06:47.053 real 0m1.800s 00:06:47.053 user 0m1.228s 00:06:47.053 sys 0m0.583s 00:06:47.053 06:19:59 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.053 06:19:59 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:47.053 ************************************ 00:06:47.053 END TEST accel_crc32c 00:06:47.053 ************************************ 00:06:47.053 06:19:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.053 06:19:59 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:47.053 06:19:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:47.053 06:19:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.053 06:19:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.053 ************************************ 00:06:47.053 START TEST accel_crc32c_C2 00:06:47.053 ************************************ 00:06:47.053 06:19:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:47.053 06:19:59 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.053 06:19:59 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:47.053 06:19:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.053 06:19:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.053 06:19:59 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:47.053 06:19:59 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.0hB5lf -t 1 -w crc32c -y -C 2 00:06:47.053 [2024-07-23 06:19:59.348454] 
Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:47.053 [2024-07-23 06:19:59.348652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:47.620 EAL: TSC is not safe to use in SMP mode 00:06:47.620 EAL: TSC is not invariant 00:06:47.620 [2024-07-23 06:19:59.899083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.620 [2024-07-23 06:19:59.989631] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:47.620 06:19:59 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.620 06:19:59 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.620 06:19:59 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.620 06:19:59 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.620 06:19:59 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.620 06:19:59 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.620 06:19:59 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:47.620 06:19:59 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:47.621 [2024-07-23 06:19:59.999384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.621 06:20:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
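The val= lines above and below are accel_test reading back the parameters accel_perf echoes for this case: the crc32c workload, a 4096-byte buffer size, the software module and a 1-second run. To replay it by hand, the command line logged at the start of the test can be reused; a minimal sketch assuming the checkout path shown in this log (dropping the harness-generated -c /tmp//sh-np.* config is an assumption that only holds when no optional accel modules need configuring, which is what the empty accel_json_cfg checks above suggest for this run):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2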
00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.995 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.996 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.996 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.996 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.996 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.996 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.996 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.996 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.996 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.996 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.996 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.996 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:48.996 06:20:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.996 00:06:48.996 real 0m1.844s 00:06:48.996 user 0m1.244s 00:06:48.996 sys 0m0.608s 00:06:48.996 06:20:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.996 ************************************ 00:06:48.996 06:20:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:48.996 END TEST accel_crc32c_C2 00:06:48.996 ************************************ 00:06:48.996 06:20:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.996 06:20:01 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:48.996 06:20:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:48.996 06:20:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.996 06:20:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.996 ************************************ 00:06:48.996 START TEST accel_copy 00:06:48.996 ************************************ 00:06:48.996 06:20:01 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:48.996 06:20:01 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:48.996 06:20:01 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:48.996 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.996 06:20:01 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.996 06:20:01 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:48.996 06:20:01 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.WnryAR -t 1 -w copy -y 00:06:48.996 [2024-07-23 06:20:01.230259] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:48.996 [2024-07-23 06:20:01.230479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:49.254 EAL: TSC is not safe to use in SMP mode 00:06:49.254 EAL: TSC is not invariant 00:06:49.512 [2024-07-23 06:20:01.772174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.512 [2024-07-23 06:20:01.871876] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:49.512 [2024-07-23 06:20:01.884481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.512 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.513 06:20:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.884 06:20:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.884 06:20:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.884 06:20:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.884 06:20:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.884 06:20:03 accel.accel_copy -- accel/accel.sh@20 -- # 
val= 00:06:50.884 06:20:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.884 06:20:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.884 06:20:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.884 06:20:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.884 06:20:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.884 06:20:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:50.885 06:20:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.885 00:06:50.885 real 0m1.842s 00:06:50.885 user 0m1.244s 00:06:50.885 sys 0m0.608s 00:06:50.885 06:20:03 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.885 06:20:03 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:50.885 ************************************ 00:06:50.885 END TEST accel_copy 00:06:50.885 ************************************ 00:06:50.885 06:20:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.885 06:20:03 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:50.885 06:20:03 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:50.885 06:20:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.885 06:20:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.885 ************************************ 00:06:50.885 START TEST accel_fill 00:06:50.885 ************************************ 00:06:50.885 06:20:03 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:50.885 06:20:03 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:50.885 06:20:03 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:50.885 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.885 06:20:03 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:50.885 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.885 06:20:03 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Ri9jb2 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:50.885 [2024-07-23 06:20:03.118974] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:50.885 [2024-07-23 06:20:03.119194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:51.450 EAL: TSC is not safe to use in SMP mode 00:06:51.450 EAL: TSC is not invariant 00:06:51.450 [2024-07-23 06:20:03.665775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.450 [2024-07-23 06:20:03.763885] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:51.450 06:20:03 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:51.450 06:20:03 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.450 06:20:03 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.450 06:20:03 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.450 06:20:03 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.450 06:20:03 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.450 06:20:03 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:51.451 [2024-07-23 06:20:03.773003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 
bytes' 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.451 06:20:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.826 06:20:04 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:52.826 06:20:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.826 00:06:52.826 real 0m1.819s 00:06:52.826 user 0m1.239s 00:06:52.826 sys 0m0.589s 00:06:52.826 06:20:04 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.826 ************************************ 00:06:52.826 END TEST accel_fill 00:06:52.826 ************************************ 00:06:52.826 06:20:04 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:52.826 06:20:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.826 06:20:04 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:52.826 06:20:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:52.826 06:20:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.826 06:20:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.826 ************************************ 00:06:52.826 START TEST accel_copy_crc32c 00:06:52.826 ************************************ 00:06:52.826 06:20:04 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:52.826 06:20:04 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:52.826 06:20:04 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:52.826 06:20:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.826 06:20:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.826 06:20:04 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:52.826 06:20:04 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.pQE2pz -t 1 -w copy_crc32c -y 00:06:52.826 [2024-07-23 06:20:04.985752] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:52.826 [2024-07-23 06:20:04.986038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:53.084 EAL: TSC is not safe to use in SMP mode 00:06:53.084 EAL: TSC is not invariant 00:06:53.084 [2024-07-23 06:20:05.562559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.343 [2024-07-23 06:20:05.656073] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:53.343 [2024-07-23 06:20:05.665613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.343 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.344 06:20:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.719 00:06:54.719 real 0m1.849s 00:06:54.719 user 0m1.228s 00:06:54.719 sys 0m0.623s 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.719 ************************************ 00:06:54.719 END TEST accel_copy_crc32c 00:06:54.719 ************************************ 00:06:54.719 06:20:06 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:54.719 06:20:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.719 06:20:06 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:54.719 06:20:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:54.719 06:20:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.719 06:20:06 accel -- common/autotest_common.sh@10 -- # set +x 
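Every case in this section follows the same wrapper pattern: run_test (from common/autotest_common.sh) prints the START/END banners, appears to account for the real/user/sys timing lines above, and hands the remaining arguments to accel_test, which launches accel_perf with the matching workload flags. The pattern, using two invocations taken verbatim from this log (run_test and accel_test are shell functions from the SPDK test harness, not standalone binaries):

  run_test accel_copy_crc32c    accel_test -t 1 -w copy_crc32c -y
  run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2

The second call is the -C 2 variant started next; apart from that flag it is identical, and its parameter dump below echoes an '8192 bytes' value where the plain run echoed 4096 bytes twice.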
00:06:54.719 ************************************ 00:06:54.719 START TEST accel_copy_crc32c_C2 00:06:54.719 ************************************ 00:06:54.719 06:20:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:54.719 06:20:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.719 06:20:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:54.719 06:20:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.719 06:20:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:54.719 06:20:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.719 06:20:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.tmVW9c -t 1 -w copy_crc32c -y -C 2 00:06:54.719 [2024-07-23 06:20:06.868540] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:54.719 [2024-07-23 06:20:06.868790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:54.977 EAL: TSC is not safe to use in SMP mode 00:06:54.977 EAL: TSC is not invariant 00:06:54.977 [2024-07-23 06:20:07.404250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.236 [2024-07-23 06:20:07.503696] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
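The same three notices open every accel_perf run in this log: the two EAL TSC warnings and the 'Unable to parse /proc/stat' message. They precede runs that all pass on this single-core FreeBSD VM ('Total cores available: 1'), so they read as environment notices rather than failures. When triaging a log like this it can help to filter them out before searching for real errors; a sketch, with the log filename purely hypothetical:

  grep -v -e 'EAL: TSC is not' -e 'Unable to parse /proc/stat' freebsd-vg-autotest.log | grep -iE 'error|fail'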
00:06:55.236 [2024-07-23 06:20:07.511783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.236 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.237 06:20:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.204 00:06:56.204 real 0m1.809s 00:06:56.204 user 0m1.237s 00:06:56.204 sys 0m0.577s 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.204 06:20:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:56.204 ************************************ 00:06:56.204 END TEST accel_copy_crc32c_C2 00:06:56.204 ************************************ 00:06:56.204 06:20:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.204 06:20:08 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:56.204 06:20:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:56.204 06:20:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.204 06:20:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.204 ************************************ 00:06:56.204 START TEST accel_dualcast 00:06:56.204 ************************************ 00:06:56.204 06:20:08 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:56.204 06:20:08 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:56.204 06:20:08 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:56.204 06:20:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:56.204 06:20:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:56.204 06:20:08 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:56.204 06:20:08 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.iQ7o1E -t 1 -w dualcast -y 00:06:56.204 [2024-07-23 06:20:08.716953] Starting SPDK 
v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:56.204 [2024-07-23 06:20:08.717157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:56.772 EAL: TSC is not safe to use in SMP mode 00:06:56.772 EAL: TSC is not invariant 00:06:56.772 [2024-07-23 06:20:09.247825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.031 [2024-07-23 06:20:09.336407] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:57.031 [2024-07-23 06:20:09.348297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.031 
06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.031 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.032 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.032 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.032 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:57.032 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.032 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.032 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.032 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.032 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.032 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.032 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.032 06:20:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.032 06:20:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.032 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.032 06:20:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.992 06:20:10 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:57.992 06:20:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.992 00:06:57.992 real 0m1.799s 00:06:57.992 user 0m1.230s 00:06:57.992 sys 0m0.577s 00:06:57.992 06:20:10 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.992 ************************************ 00:06:57.992 END TEST accel_dualcast 00:06:57.992 ************************************ 00:06:57.992 06:20:10 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:58.250 06:20:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.250 06:20:10 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:58.250 06:20:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:58.250 06:20:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.250 06:20:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.250 ************************************ 00:06:58.250 START TEST accel_compare 00:06:58.250 ************************************ 00:06:58.250 06:20:10 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:58.250 06:20:10 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:58.250 06:20:10 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:58.250 06:20:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.250 06:20:10 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:58.250 06:20:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.250 06:20:10 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ZjPDPu -t 1 -w compare -y 00:06:58.250 [2024-07-23 06:20:10.557073] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 
initialization... 00:06:58.250 [2024-07-23 06:20:10.557270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:58.816 EAL: TSC is not safe to use in SMP mode 00:06:58.817 EAL: TSC is not invariant 00:06:58.817 [2024-07-23 06:20:11.078873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.817 [2024-07-23 06:20:11.167712] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:58.817 [2024-07-23 06:20:11.177978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 
06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.817 06:20:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 06:20:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:00.206 06:20:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.206 06:20:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.206 06:20:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:00.206 06:20:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.206 00:07:00.206 real 0m1.789s 00:07:00.206 user 0m1.232s 00:07:00.206 sys 0m0.561s 00:07:00.206 06:20:12 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.206 ************************************ 00:07:00.206 END TEST accel_compare 00:07:00.206 ************************************ 00:07:00.206 06:20:12 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:00.206 06:20:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.206 06:20:12 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:00.206 06:20:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:00.206 06:20:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.206 06:20:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.206 ************************************ 00:07:00.206 START TEST accel_xor 00:07:00.206 ************************************ 00:07:00.206 06:20:12 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:00.206 06:20:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:00.206 06:20:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:00.206 06:20:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.206 06:20:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.206 06:20:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:00.206 06:20:12 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.g39Ldc -t 1 -w xor -y 00:07:00.206 [2024-07-23 06:20:12.388654] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
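The xor pass kicked off above follows the same pattern as the dualcast and compare passes before it: run_test wraps accel_test, which builds the accel config and execs build/examples/accel_perf with a generated temporary JSON config (-c /tmp//sh-np.*), a 1-second runtime (-t 1), the workload name (-w xor) and verification enabled (-y). A minimal sketch of an equivalent manual invocation, reusing only the flags visible in this log and assuming accel_perf falls back to its built-in software module when the generated -c config is omitted:

    # Illustrative manual re-run of the xor verify workload (flags taken from the
    # logged command line); dropping -c is an assumption -- the harness normally
    # passes a generated JSON config via -c /tmp//sh-np.*
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w xor -y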
00:07:00.206 [2024-07-23 06:20:12.388871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:00.464 EAL: TSC is not safe to use in SMP mode 00:07:00.464 EAL: TSC is not invariant 00:07:00.464 [2024-07-23 06:20:12.929926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.722 [2024-07-23 06:20:13.030501] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:00.722 [2024-07-23 06:20:13.040942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.722 06:20:13 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.722 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.723 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.723 06:20:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.723 06:20:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.723 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.723 06:20:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.095 06:20:14 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.095 00:07:02.095 real 0m1.819s 00:07:02.095 user 0m1.241s 00:07:02.095 sys 0m0.589s 00:07:02.095 06:20:14 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.095 06:20:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:02.095 ************************************ 00:07:02.095 END TEST accel_xor 00:07:02.095 ************************************ 00:07:02.095 06:20:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.095 06:20:14 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:02.095 06:20:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:02.095 06:20:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.095 06:20:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.095 ************************************ 00:07:02.095 START TEST accel_xor 00:07:02.095 ************************************ 00:07:02.095 06:20:14 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:02.095 06:20:14 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.SsImsY -t 1 -w xor -y -x 3 00:07:02.095 [2024-07-23 06:20:14.251938] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
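This second accel_xor case repeats the xor verify workload with three source buffers instead of the two used in the previous run, which is what the additional -x 3 on the accel_perf command line above selects. A sketch of the same run done by hand, under the same assumption that the generated -c config can be left out:

    # xor verify across three source buffers (-x 3), mirroring the logged command line
    ./build/examples/accel_perf -t 1 -w xor -y -x 3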
00:07:02.095 [2024-07-23 06:20:14.252245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:02.353 EAL: TSC is not safe to use in SMP mode 00:07:02.353 EAL: TSC is not invariant 00:07:02.353 [2024-07-23 06:20:14.796137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.611 [2024-07-23 06:20:14.881501] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:02.611 [2024-07-23 06:20:14.892540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.611 06:20:14 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.611 06:20:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.545 06:20:16 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:03.545 06:20:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.545 00:07:03.545 real 0m1.817s 00:07:03.545 user 0m1.251s 00:07:03.545 sys 0m0.578s 00:07:03.545 06:20:16 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.545 06:20:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:03.545 ************************************ 00:07:03.545 END TEST accel_xor 00:07:03.545 ************************************ 00:07:03.803 06:20:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.803 06:20:16 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:03.803 06:20:16 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:03.803 06:20:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.803 06:20:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.803 ************************************ 00:07:03.803 START TEST accel_dif_verify 00:07:03.803 ************************************ 00:07:03.803 06:20:16 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:03.803 06:20:16 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:03.803 06:20:16 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:03.803 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.803 06:20:16 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:03.803 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.803 06:20:16 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.F6l2GL -t 1 -w dif_verify 00:07:03.803 [2024-07-23 06:20:16.110960] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
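accel_dif_verify moves from raw data operations to protection-information handling: accel_perf is started with -w dif_verify, and the values read in below (4096-byte buffers, 512-byte blocks, 8 bytes of metadata) are consistent with each 512-byte block carrying an 8-byte DIF tuple that the software module has to check. An illustrative manual equivalent, again assuming the generated -c config may be dropped:

    # DIF verify workload for 1 second on the software accel module (sketch)
    ./build/examples/accel_perf -t 1 -w dif_verify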
00:07:03.803 [2024-07-23 06:20:16.111232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:04.371 EAL: TSC is not safe to use in SMP mode 00:07:04.371 EAL: TSC is not invariant 00:07:04.371 [2024-07-23 06:20:16.650874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.371 [2024-07-23 06:20:16.733513] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:04.371 [2024-07-23 06:20:16.741871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.371 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 
00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.372 06:20:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.802 06:20:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.802 06:20:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.802 06:20:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.802 06:20:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.802 06:20:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.802 06:20:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.802 06:20:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.802 06:20:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.802 06:20:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:05.803 06:20:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.803 00:07:05.803 real 0m1.795s 00:07:05.803 user 0m1.244s 00:07:05.803 sys 0m0.565s 00:07:05.803 06:20:17 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.803 06:20:17 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:05.803 ************************************ 00:07:05.803 END TEST accel_dif_verify 00:07:05.803 ************************************ 00:07:05.803 06:20:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.803 06:20:17 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:05.803 06:20:17 accel 
-- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:05.803 06:20:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.803 06:20:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.803 ************************************ 00:07:05.803 START TEST accel_dif_generate 00:07:05.803 ************************************ 00:07:05.803 06:20:17 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:05.803 06:20:17 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:05.803 06:20:17 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:05.803 06:20:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.803 06:20:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.803 06:20:17 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:05.803 06:20:17 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.xLGnZV -t 1 -w dif_generate 00:07:05.803 [2024-07-23 06:20:17.947814] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:05.803 [2024-07-23 06:20:17.948013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:06.061 EAL: TSC is not safe to use in SMP mode 00:07:06.061 EAL: TSC is not invariant 00:07:06.061 [2024-07-23 06:20:18.495279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.321 [2024-07-23 06:20:18.579743] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 
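accel_dif_generate is the companion case to the dif_verify run above: rather than checking existing protection information it generates it, so only the workload name changes on an otherwise identical accel_perf command line. A hand-run sketch under the same assumptions as the earlier examples:

    # DIF generate workload for 1 second (sketch; flags mirror the logged command line)
    ./build/examples/accel_perf -t 1 -w dif_generate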
00:07:06.321 [2024-07-23 06:20:18.590560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.321 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.322 06:20:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.258 06:20:19 
accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:07.258 06:20:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.258 00:07:07.258 real 0m1.808s 00:07:07.258 user 0m1.225s 00:07:07.258 sys 0m0.593s 00:07:07.258 06:20:19 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.258 06:20:19 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:07.258 ************************************ 00:07:07.258 END TEST accel_dif_generate 00:07:07.258 ************************************ 00:07:07.517 06:20:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.517 06:20:19 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:07.517 06:20:19 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:07.517 06:20:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.517 06:20:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.517 ************************************ 00:07:07.517 START TEST accel_dif_generate_copy 00:07:07.517 ************************************ 00:07:07.517 06:20:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:07.517 06:20:19 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:07.517 06:20:19 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:07.517 06:20:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.517 06:20:19 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:07.517 06:20:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read 
-r var val 00:07:07.517 06:20:19 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.KQbVA8 -t 1 -w dif_generate_copy 00:07:07.517 [2024-07-23 06:20:19.801834] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:07.517 [2024-07-23 06:20:19.801997] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:08.083 EAL: TSC is not safe to use in SMP mode 00:07:08.083 EAL: TSC is not invariant 00:07:08.083 [2024-07-23 06:20:20.356110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.083 [2024-07-23 06:20:20.439250] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:08.083 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:08.083 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.083 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.083 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.083 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.083 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.083 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:08.084 [2024-07-23 06:20:20.450103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # 
val=dif_generate_copy 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" 
in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.084 06:20:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.462 00:07:09.462 real 0m1.839s 00:07:09.462 user 0m1.259s 00:07:09.462 sys 0m0.591s 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.462 
************************************ 00:07:09.462 END TEST accel_dif_generate_copy 00:07:09.462 ************************************ 00:07:09.462 06:20:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:09.462 06:20:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.462 06:20:21 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:09.462 06:20:21 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.462 06:20:21 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:09.462 06:20:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.462 06:20:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.462 ************************************ 00:07:09.462 START TEST accel_comp 00:07:09.462 ************************************ 00:07:09.462 06:20:21 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.462 06:20:21 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:09.462 06:20:21 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:09.462 06:20:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.462 06:20:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.462 06:20:21 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.462 06:20:21 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.XQfuEY -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.462 [2024-07-23 06:20:21.683991] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:09.462 [2024-07-23 06:20:21.684256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:09.721 EAL: TSC is not safe to use in SMP mode 00:07:09.721 EAL: TSC is not invariant 00:07:09.721 [2024-07-23 06:20:22.225937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.979 [2024-07-23 06:20:22.315986] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 
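The two DIF cases above run entirely on the software accel module: the traced values show 4096-byte buffers, 512-byte blocks and 8 bytes of metadata (which lines up with a standard 512+8 DIF layout), a 1-second run, and a final check that the selected module is "software" and the opcode matches. A minimal sketch for repeating one of these runs by hand, assuming the same build tree as the paths logged above (the harness itself drives the binary through accel.sh and hands it what appears to be a per-run config via -c /tmp//sh-np.*), would be:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate         # 1-second software dif_generate run
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy    # same workload for the generate-and-copy opcode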
00:07:09.979 [2024-07-23 06:20:22.323995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.979 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.980 06:20:22 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.980 06:20:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.352 
06:20:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:11.352 06:20:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.352 00:07:11.352 real 0m1.814s 00:07:11.352 user 0m1.228s 00:07:11.352 sys 0m0.594s 00:07:11.352 06:20:23 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.352 ************************************ 00:07:11.352 END TEST accel_comp 00:07:11.352 ************************************ 00:07:11.352 06:20:23 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:11.352 06:20:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.352 06:20:23 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.352 06:20:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:11.352 06:20:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.352 06:20:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.352 ************************************ 00:07:11.352 START TEST accel_decomp 00:07:11.352 ************************************ 00:07:11.352 06:20:23 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.352 06:20:23 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:11.352 06:20:23 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:11.353 06:20:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 06:20:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.353 06:20:23 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.353 06:20:23 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.vaoQi5 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.353 [2024-07-23 06:20:23.541440] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:11.353 [2024-07-23 06:20:23.541719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:11.611 EAL: TSC is not safe to use in SMP mode 00:07:11.611 EAL: TSC is not invariant 00:07:11.611 [2024-07-23 06:20:24.088938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.869 [2024-07-23 06:20:24.186835] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
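Every case in this block repeats the same startup pattern before its val trace: the EAL warns that the TSC is neither SMP-safe nor invariant, spdk_app_start reports that it cannot parse /proc/stat (expected on a FreeBSD guest, which has no Linux-style /proc/stat), and a single reactor starts on core 0. The compress case above and this decompress case both feed accel_perf the repository's test input file; a hand-run sketch using exactly the flags captured in the log (the -c /tmp//sh-np.* argument is a per-run temporary generated by the harness) might look like:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib        # 1-second software compress of test/accel/bib
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y   # decompress the same input, -y passed through as the harness does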
00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:11.869 [2024-07-23 06:20:24.196242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.869 06:20:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 06:20:25 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:13.240 06:20:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.240 00:07:13.240 real 0m1.829s 00:07:13.240 user 0m1.250s 00:07:13.240 sys 0m0.590s 00:07:13.240 06:20:25 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.240 06:20:25 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:13.240 ************************************ 00:07:13.240 END TEST accel_decomp 00:07:13.240 ************************************ 00:07:13.240 06:20:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.240 06:20:25 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:13.240 06:20:25 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:13.240 06:20:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.240 06:20:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.240 ************************************ 00:07:13.240 START TEST accel_decomp_full 00:07:13.240 ************************************ 00:07:13.240 06:20:25 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:13.240 06:20:25 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:13.240 06:20:25 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:13.240 06:20:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 06:20:25 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:13.240 06:20:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 06:20:25 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.yYChPd -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:13.240 [2024-07-23 06:20:25.413258] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:13.240 [2024-07-23 06:20:25.413514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:13.498 EAL: TSC is not safe to use in SMP mode 00:07:13.498 EAL: TSC is not invariant 00:07:13.498 [2024-07-23 06:20:25.959364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.757 [2024-07-23 06:20:26.044097] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:13.757 [2024-07-23 06:20:26.052025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # 
val=decompress 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full 
-- accel/accel.sh@20 -- # val= 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.757 06:20:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.158 06:20:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:15.158 06:20:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:15.159 06:20:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.159 00:07:15.159 real 0m1.820s 00:07:15.159 user 0m1.230s 00:07:15.159 sys 0m0.596s 00:07:15.159 06:20:27 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.159 06:20:27 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:15.159 ************************************ 00:07:15.159 END TEST accel_decomp_full 00:07:15.159 ************************************ 00:07:15.159 06:20:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.159 06:20:27 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
00:07:15.159 06:20:27 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:15.159 06:20:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.159 06:20:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.159 ************************************ 00:07:15.159 START TEST accel_decomp_mcore 00:07:15.159 ************************************ 00:07:15.159 06:20:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:15.159 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:15.159 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:15.159 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.159 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.159 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:15.159 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.agGWW7 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:15.159 [2024-07-23 06:20:27.276792] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:15.159 [2024-07-23 06:20:27.277117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:15.418 EAL: TSC is not safe to use in SMP mode 00:07:15.418 EAL: TSC is not invariant 00:07:15.418 [2024-07-23 06:20:27.831622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.418 [2024-07-23 06:20:27.910956] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:15.418 [2024-07-23 06:20:27.911024] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:07:15.418 [2024-07-23 06:20:27.911050] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:07:15.418 [2024-07-23 06:20:27.911057] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 
00:07:15.418 [2024-07-23 06:20:27.924197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.418 [2024-07-23 06:20:27.924082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.418 [2024-07-23 06:20:27.924142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.418 [2024-07-23 06:20:27.924192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
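The mcore variant is the first case in this section to run on more than one core: the -m 0xf mask handed to accel_perf shows up as -c 0xf in the EAL parameters, the EAL reports four cores available, and four reactors start on cores 0 through 3 (their timestamps appear slightly out of order in the capture, which is just how the per-core messages interleave). A sketch of the equivalent hand-run invocation, under the same build-tree assumption as before, would be:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf   # same decompress workload spread across the 0xf (four-core) mask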
00:07:15.418 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.775 06:20:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.711 00:07:16.711 real 0m1.823s 00:07:16.711 user 0m4.352s 00:07:16.711 sys 0m0.604s 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.711 ************************************ 00:07:16.711 END TEST accel_decomp_mcore 00:07:16.711 ************************************ 00:07:16.711 06:20:29 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:16.711 06:20:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.711 06:20:29 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.711 06:20:29 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:16.711 06:20:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.711 06:20:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.711 ************************************ 00:07:16.711 START TEST accel_decomp_full_mcore 00:07:16.711 ************************************ 00:07:16.711 06:20:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.711 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:16.711 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:16.711 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.711 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.711 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.711 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.sJ4TO3 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.711 [2024-07-23 06:20:29.143076] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:16.711 [2024-07-23 06:20:29.143342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:17.289 EAL: TSC is not safe to use in SMP mode 00:07:17.289 EAL: TSC is not invariant 00:07:17.289 [2024-07-23 06:20:29.696175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.289 [2024-07-23 06:20:29.779335] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:17.289 [2024-07-23 06:20:29.779395] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:07:17.289 [2024-07-23 06:20:29.779421] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:07:17.290 [2024-07-23 06:20:29.779429] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
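The accel_decomp_full_mcore run that starts here is a single accel_perf invocation once the harness plumbing is stripped away. The sketch below reproduces it by hand; the -c argument in the trace points at a throwaway JSON config the test script writes under /tmp (sh-np.sJ4TO3 in this run), so it is dropped here, and the remaining flags are copied from the logged command.

# Multi-core, full-buffer software decompress of test/accel/bib:
#   -w decompress  workload, -t 1 one-second run, -m 0xf four cores
#   (the EAL notice above reports "Total cores available: 4"),
#   -y and -o 0 exactly as the harness passes them.
cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -m 0xf -t 1 -w decompress \
    -l test/accel/bib -y -o 0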
00:07:17.290 [2024-07-23 06:20:29.788628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.290 [2024-07-23 06:20:29.788526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.290 [2024-07-23 06:20:29.788575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.290 [2024-07-23 06:20:29.788627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" 
in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.290 06:20:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.666 
06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.666 00:07:18.666 real 0m1.837s 00:07:18.666 user 0m4.403s 
00:07:18.666 sys 0m0.604s 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.666 06:20:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:18.666 ************************************ 00:07:18.666 END TEST accel_decomp_full_mcore 00:07:18.666 ************************************ 00:07:18.666 06:20:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.666 06:20:31 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:18.666 06:20:31 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:18.666 06:20:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.666 06:20:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.666 ************************************ 00:07:18.666 START TEST accel_decomp_mthread 00:07:18.666 ************************************ 00:07:18.666 06:20:31 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:18.666 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:18.666 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:18.666 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.666 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.666 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:18.666 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.6DrqhH -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:18.666 [2024-07-23 06:20:31.023428] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:18.666 [2024-07-23 06:20:31.023663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:19.235 EAL: TSC is not safe to use in SMP mode 00:07:19.235 EAL: TSC is not invariant 00:07:19.236 [2024-07-23 06:20:31.586145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.236 [2024-07-23 06:20:31.678315] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
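Most of the xtrace volume in these accel tests is the same few accel.sh lines repeated: IFS=:, read -r var val, case "$var" in, then assignments such as accel_module=software and accel_opc=decompress. A rough reconstruction of that parsing loop is sketched below; only the variable names and the IFS=: / read / case structure come from the trace, while the key names and the sample input are guesses for illustration.

# Hypothetical reconstruction of the key:value parser seen in the trace;
# the real loop in accel.sh may differ in detail.
while IFS=: read -r var val; do
    val=${val# }                           # trim the space after the colon
    case "$var" in
        "Module")        accel_module=$val ;;   # trace shows accel_module=software
        "Workload Type") accel_opc=$val ;;      # trace shows accel_opc=decompress
    esac
done <<'EOF'
Module: software
Workload Type: decompress
EOF
echo "$accel_module / $accel_opc"          # prints: software / decompress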
00:07:19.236 [2024-07-23 06:20:31.689897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@22 
-- # accel_module=software 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.236 06:20:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.618 06:20:32 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.618 00:07:20.618 real 0m1.860s 00:07:20.618 user 0m1.248s 00:07:20.618 sys 0m0.607s 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.618 ************************************ 00:07:20.618 06:20:32 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:20.618 END TEST accel_decomp_mthread 00:07:20.618 ************************************ 00:07:20.618 06:20:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.618 06:20:32 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.618 06:20:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:20.618 06:20:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.618 06:20:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.618 ************************************ 00:07:20.618 START TEST accel_decomp_full_mthread 00:07:20.618 ************************************ 00:07:20.618 06:20:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.618 06:20:32 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:20.618 06:20:32 
accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:20.618 06:20:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.618 06:20:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.618 06:20:32 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.619 06:20:32 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.pY0cyC -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.619 [2024-07-23 06:20:32.928207] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:20.619 [2024-07-23 06:20:32.928459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:21.186 EAL: TSC is not safe to use in SMP mode 00:07:21.186 EAL: TSC is not invariant 00:07:21.186 [2024-07-23 06:20:33.484830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.186 [2024-07-23 06:20:33.571280] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
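With accel_decomp_full_mthread the log has now covered all four software-decompress variants. They differ only in core mask (-m), thread count (-T) and the -o 0 full-buffer switch (the '111250 bytes' value in the trace versus '4096 bytes' for the plain runs); only the plain mcore entry below is inferred from its test name, the other three flag sets are copied from the logged accel_perf commands. A compact way to re-run the family outside the harness, with the shared flags factored out:

cd /home/vagrant/spdk_repo/spdk
common="-t 1 -w decompress -l test/accel/bib -y"
declare -A variant=(
    [decomp_mcore]="-m 0xf"
    [decomp_full_mcore]="-o 0 -m 0xf"
    [decomp_mthread]="-T 2"
    [decomp_full_mthread]="-o 0 -T 2"
)
for name in "${!variant[@]}"; do
    echo "== $name =="
    ./build/examples/accel_perf $common ${variant[$name]}
done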
00:07:21.186 [2024-07-23 06:20:33.582093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val=software 00:07:21.186 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 06:20:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.566 00:07:22.566 real 0m1.853s 00:07:22.566 user 0m1.275s 00:07:22.566 sys 0m0.589s 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.566 ************************************ 00:07:22.566 END TEST accel_decomp_full_mthread 00:07:22.566 ************************************ 00:07:22.566 06:20:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:22.566 06:20:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.566 06:20:34 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:22.566 06:20:34 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.a6jU7m 00:07:22.566 06:20:34 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:22.566 06:20:34 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.566 06:20:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.566 ************************************ 00:07:22.566 START TEST accel_dif_functional_tests 00:07:22.566 ************************************ 00:07:22.566 06:20:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.a6jU7m 00:07:22.566 [2024-07-23 06:20:34.823877] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:22.566 [2024-07-23 06:20:34.824054] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:23.135 EAL: TSC is not safe to use in SMP mode 00:07:23.135 EAL: TSC is not invariant 00:07:23.135 [2024-07-23 06:20:35.382929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:23.135 [2024-07-23 06:20:35.471648] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:23.135 [2024-07-23 06:20:35.471716] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:07:23.135 [2024-07-23 06:20:35.471741] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:07:23.135 06:20:35 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:23.135 06:20:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.135 06:20:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.135 06:20:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.135 06:20:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.135 06:20:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.135 06:20:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:23.135 06:20:35 accel -- accel/accel.sh@41 -- # jq -r . 
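accel_dif_functional_tests is not an accel_perf run but a standalone CUnit binary (test/accel/dif/dif); the -c argument is again a harness temp config and can normally be omitted. The dif.c *ERROR* lines printed further down are the expected by-product of the negative cases ('DIF not generated', 'APPTAG incorrect', and so on), which deliberately feed mismatching guard/app/ref tags and are still reported as passed. A minimal by-hand run:

# Run the DIF functional tests directly and show the per-test results plus
# the CUnit summary (26 tests in this log).
cd /home/vagrant/spdk_repo/spdk
./test/accel/dif/dif 2>&1 | grep -E 'Test:|Run Summary|asserts'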
00:07:23.135 [2024-07-23 06:20:35.481777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.135 [2024-07-23 06:20:35.481712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.135 [2024-07-23 06:20:35.481771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.135 00:07:23.135 00:07:23.135 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.135 http://cunit.sourceforge.net/ 00:07:23.135 00:07:23.135 00:07:23.135 Suite: accel_dif 00:07:23.135 Test: verify: DIF generated, GUARD check ...passed 00:07:23.135 Test: verify: DIF generated, APPTAG check ...passed 00:07:23.135 Test: verify: DIF generated, REFTAG check ...passed 00:07:23.135 Test: verify: DIF not generated, GUARD check ...passed 00:07:23.135 Test: verify: DIF not generated, APPTAG check ...passed 00:07:23.135 Test: verify: DIF not generated, REFTAG check ...passed 00:07:23.135 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:23.135 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:23.135 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:23.135 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:23.135 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:23.135 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:23.135 Test: verify copy: DIF generated, GUARD check ...passed 00:07:23.135 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:23.135 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:23.135 Test: verify copy: DIF not generated, GUARD check ...passed 00:07:23.135 Test: verify copy: DIF not generated, APPTAG check ...passed 00:07:23.135 Test: verify copy: DIF not generated, REFTAG check ...passed 00:07:23.135 Test: generate copy: DIF generated, GUARD check ...passed 00:07:23.135 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:23.135 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:23.135 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:23.135 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:23.135 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:23.135 Test: generate copy: iovecs-len validate ...passed 00:07:23.135 Test: generate copy: buffer alignment validate ...passed 00:07:23.135 00:07:23.135 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.135 suites 1 1 n/a 0 0 00:07:23.135 tests 26 26 26 0 0 00:07:23.135 asserts 115 115 115 0 n/a 00:07:23.135 00:07:23.135 Elapsed time = 0.000 seconds 00:07:23.135 [2024-07-23 06:20:35.498105] dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:23.135 [2024-07-23 06:20:35.498167] dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:23.135 [2024-07-23 06:20:35.498193] dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:23.135 [2024-07-23 06:20:35.498235] dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:23.135 [2024-07-23 06:20:35.498302] dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:23.135 [2024-07-23 06:20:35.498381] dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:23.135 [2024-07-23 06:20:35.498406] dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 
00:07:23.135 [2024-07-23 06:20:35.498430] dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:23.135 [2024-07-23 06:20:35.498542] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:23.394 00:07:23.394 real 0m0.866s 00:07:23.394 user 0m0.419s 00:07:23.394 sys 0m0.595s 00:07:23.394 ************************************ 00:07:23.394 END TEST accel_dif_functional_tests 00:07:23.394 ************************************ 00:07:23.394 06:20:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.394 06:20:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:23.394 06:20:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.394 00:07:23.394 real 0m41.529s 00:07:23.394 user 0m33.575s 00:07:23.394 sys 0m14.882s 00:07:23.394 06:20:35 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:23.394 06:20:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:23.394 ************************************ 00:07:23.394 END TEST accel 00:07:23.394 ************************************ 00:07:23.394 06:20:35 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.394 06:20:35 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:23.394 06:20:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.394 06:20:35 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.394 06:20:35 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.394 06:20:35 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.394 06:20:35 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.394 06:20:35 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.394 06:20:35 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.394 06:20:35 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.394 06:20:35 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.394 06:20:35 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.394 06:20:35 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.394 06:20:35 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.394 06:20:35 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.394 06:20:35 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.394 06:20:35 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:23.394 06:20:35 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.394 06:20:35 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.394 06:20:35 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:23.394 06:20:35 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:23.394 06:20:35 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:23.394 06:20:35 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:07:23.394 06:20:35 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 
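The real/user/sys triplets that close each test (and the 0m41.529s total for TEST accel just above) are in the format of bash's time builtin: the run_test helper in common/autotest_common.sh, whose line numbers prefix these trace entries, times the test body and prints the START TEST / END TEST banners. A rough sketch of such a wrapper, with hypothetical internals, is:

# Hypothetical shape of the run_test wrapper; the real one lives in
# common/autotest_common.sh and differs in detail (xtrace control, arg checks).
run_test_sketch() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}
run_test_sketch demo_sleep sleep 1    # prints the banners plus real/user/sys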
00:07:23.394 06:20:35 -- common/autotest_common.sh@1142 -- # return 0 00:07:23.394 06:20:35 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:23.394 06:20:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.394 06:20:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.394 06:20:35 -- common/autotest_common.sh@10 -- # set +x 00:07:23.394 ************************************ 00:07:23.394 START TEST accel_rpc 00:07:23.394 ************************************ 00:07:23.394 06:20:35 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:23.394 * Looking for test storage... 00:07:23.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:23.394 06:20:35 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:23.394 06:20:35 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=47508 00:07:23.394 06:20:35 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 47508 00:07:23.394 06:20:35 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:23.394 06:20:35 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 47508 ']' 00:07:23.394 06:20:35 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.394 06:20:35 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.394 06:20:35 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.394 06:20:35 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.394 06:20:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.394 [2024-07-23 06:20:35.909668] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:23.653 [2024-07-23 06:20:35.909879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:24.219 EAL: TSC is not safe to use in SMP mode 00:07:24.219 EAL: TSC is not invariant 00:07:24.219 [2024-07-23 06:20:36.492397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.219 [2024-07-23 06:20:36.596367] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
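The accel_rpc suite that begins here drives a live spdk_tgt over JSON-RPC; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py. The assign-opcode sequence exercised in the lines that follow can be reproduced roughly as below (the sleep stands in for the harness's waitforlisten, and the pid handling is simplified):

cd /home/vagrant/spdk_repo/spdk
./build/bin/spdk_tgt --wait-for-rpc &        # start the target, RPC server only
tgt_pid=$!
sleep 1                                      # crude stand-in for waitforlisten
./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted before init
./scripts/rpc.py accel_assign_opc -o copy -m software    # overrides the above
./scripts/rpc.py framework_start_init                    # finish subsystem init
./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # expected: software
kill $tgt_pid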
00:07:24.219 [2024-07-23 06:20:36.598965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.479 06:20:36 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.479 06:20:36 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:24.479 06:20:36 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:24.479 06:20:36 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:24.479 06:20:36 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:24.479 06:20:36 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:24.479 06:20:36 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:24.479 06:20:36 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.479 06:20:36 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.479 06:20:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.479 ************************************ 00:07:24.479 START TEST accel_assign_opcode 00:07:24.479 ************************************ 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:24.479 [2024-07-23 06:20:36.931338] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:24.479 [2024-07-23 06:20:36.939331] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.479 06:20:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:24.738 06:20:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.738 software 00:07:24.738 00:07:24.738 real 0m0.078s 00:07:24.738 user 0m0.002s 00:07:24.738 sys 0m0.018s 00:07:24.738 06:20:37 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.738 06:20:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:24.738 ************************************ 00:07:24.738 END TEST accel_assign_opcode 00:07:24.738 ************************************ 00:07:24.738 06:20:37 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:24.738 06:20:37 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 47508 00:07:24.738 06:20:37 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 47508 ']' 00:07:24.738 06:20:37 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 47508 00:07:24.738 06:20:37 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:24.738 06:20:37 accel_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:24.738 06:20:37 accel_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 47508 00:07:24.738 06:20:37 accel_rpc -- common/autotest_common.sh@956 -- # tail -1 00:07:24.738 06:20:37 accel_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:07:24.738 06:20:37 accel_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:07:24.738 killing process with pid 47508 00:07:24.738 06:20:37 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47508' 00:07:24.738 06:20:37 accel_rpc -- common/autotest_common.sh@967 -- # kill 47508 00:07:24.738 06:20:37 accel_rpc -- common/autotest_common.sh@972 -- # wait 47508 00:07:24.996 00:07:24.996 real 0m1.575s 00:07:24.996 user 0m1.334s 00:07:24.996 sys 0m0.845s 00:07:24.996 06:20:37 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.996 06:20:37 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.996 ************************************ 00:07:24.996 END TEST accel_rpc 00:07:24.996 ************************************ 00:07:24.996 06:20:37 -- common/autotest_common.sh@1142 -- # return 0 00:07:24.996 06:20:37 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:24.996 06:20:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.996 06:20:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.996 06:20:37 -- common/autotest_common.sh@10 -- # set +x 00:07:24.996 ************************************ 00:07:24.996 START TEST app_cmdline 00:07:24.996 ************************************ 00:07:24.996 06:20:37 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:25.254 * Looking for test storage... 00:07:25.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:25.254 06:20:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:25.254 06:20:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=47590 00:07:25.254 06:20:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 47590 00:07:25.254 06:20:37 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 47590 ']' 00:07:25.254 06:20:37 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.254 06:20:37 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:25.254 06:20:37 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
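A minimal standalone sketch of the sequence the accel_rpc test above exercises through rpc_cmd, issued directly with scripts/rpc.py against a target started with --wait-for-rpc. The RPC names, the socket path and the expected "software" mapping come from the trace; the repository paths, the backgrounding of spdk_tgt and the availability of jq are assumptions about the environment.

# Start the target with subsystems left uninitialized, and wait for its RPC socket.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

# Assign the copy opcode to the software module while initialization is still pending,
# then finish initialization and confirm the assignment is reported back.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # expected: software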
00:07:25.254 06:20:37 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.254 06:20:37 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.254 06:20:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.254 [2024-07-23 06:20:37.538503] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:25.254 [2024-07-23 06:20:37.538675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:25.821 EAL: TSC is not safe to use in SMP mode 00:07:25.821 EAL: TSC is not invariant 00:07:25.821 [2024-07-23 06:20:38.081983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.821 [2024-07-23 06:20:38.180428] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:25.821 [2024-07-23 06:20:38.183025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.387 06:20:38 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:26.388 06:20:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:26.388 { 00:07:26.388 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:07:26.388 "fields": { 00:07:26.388 "major": 24, 00:07:26.388 "minor": 9, 00:07:26.388 "patch": 0, 00:07:26.388 "suffix": "-pre", 00:07:26.388 "commit": "f7b31b2b9" 00:07:26.388 } 00:07:26.388 } 00:07:26.388 06:20:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:26.388 06:20:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:26.388 06:20:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:26.388 06:20:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:26.388 06:20:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:26.388 06:20:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:26.388 06:20:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.388 06:20:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:26.388 06:20:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:26.388 06:20:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:26.388 06:20:38 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:26.646 request: 00:07:26.646 { 00:07:26.646 "method": "env_dpdk_get_mem_stats", 00:07:26.646 "req_id": 1 00:07:26.646 } 00:07:26.646 Got JSON-RPC error response 00:07:26.646 response: 00:07:26.646 { 00:07:26.646 "code": -32601, 00:07:26.646 "message": "Method not found" 00:07:26.646 } 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:26.646 06:20:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 47590 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 47590 ']' 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 47590 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@956 -- # ps -c -o command 47590 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@956 -- # tail -1 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:07:26.646 killing process with pid 47590 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47590' 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@967 -- # kill 47590 00:07:26.646 06:20:39 app_cmdline -- common/autotest_common.sh@972 -- # wait 47590 00:07:27.228 00:07:27.228 real 0m2.046s 00:07:27.228 user 0m2.420s 00:07:27.228 sys 0m0.781s 00:07:27.228 06:20:39 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.228 06:20:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.228 ************************************ 00:07:27.228 END TEST app_cmdline 00:07:27.228 ************************************ 00:07:27.228 06:20:39 -- common/autotest_common.sh@1142 -- # return 0 00:07:27.228 06:20:39 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:27.228 06:20:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.228 06:20:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.228 06:20:39 -- common/autotest_common.sh@10 -- # set +x 00:07:27.228 ************************************ 00:07:27.228 START TEST version 00:07:27.228 ************************************ 00:07:27.228 06:20:39 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:27.228 * Looking for test storage... 
00:07:27.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:27.228 06:20:39 version -- app/version.sh@17 -- # get_header_version major 00:07:27.228 06:20:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.228 06:20:39 version -- app/version.sh@14 -- # cut -f2 00:07:27.228 06:20:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.228 06:20:39 version -- app/version.sh@17 -- # major=24 00:07:27.228 06:20:39 version -- app/version.sh@18 -- # get_header_version minor 00:07:27.228 06:20:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.228 06:20:39 version -- app/version.sh@14 -- # cut -f2 00:07:27.228 06:20:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.228 06:20:39 version -- app/version.sh@18 -- # minor=9 00:07:27.228 06:20:39 version -- app/version.sh@19 -- # get_header_version patch 00:07:27.228 06:20:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.228 06:20:39 version -- app/version.sh@14 -- # cut -f2 00:07:27.228 06:20:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.228 06:20:39 version -- app/version.sh@19 -- # patch=0 00:07:27.228 06:20:39 version -- app/version.sh@20 -- # get_header_version suffix 00:07:27.228 06:20:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.228 06:20:39 version -- app/version.sh@14 -- # cut -f2 00:07:27.228 06:20:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.228 06:20:39 version -- app/version.sh@20 -- # suffix=-pre 00:07:27.228 06:20:39 version -- app/version.sh@22 -- # version=24.9 00:07:27.228 06:20:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:27.228 06:20:39 version -- app/version.sh@28 -- # version=24.9rc0 00:07:27.228 06:20:39 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:27.228 06:20:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:27.228 06:20:39 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:27.228 06:20:39 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:27.228 00:07:27.228 real 0m0.195s 00:07:27.228 user 0m0.186s 00:07:27.228 sys 0m0.099s 00:07:27.228 06:20:39 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.228 06:20:39 version -- common/autotest_common.sh@10 -- # set +x 00:07:27.228 ************************************ 00:07:27.228 END TEST version 00:07:27.228 ************************************ 00:07:27.228 06:20:39 -- common/autotest_common.sh@1142 -- # return 0 00:07:27.228 06:20:39 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:07:27.228 06:20:39 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:07:27.228 06:20:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.228 06:20:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.228 06:20:39 -- common/autotest_common.sh@10 -- # set +x 00:07:27.229 ************************************ 00:07:27.229 START TEST blockdev_general 00:07:27.229 
************************************ 00:07:27.229 06:20:39 blockdev_general -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:07:27.487 * Looking for test storage... 00:07:27.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:27.487 06:20:39 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@673 -- # '[' FreeBSD = Linux ']' 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@678 -- # PRE_RESERVED_MEM=2048 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=47725 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 47725 00:07:27.487 06:20:39 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 47725 ']' 00:07:27.487 06:20:39 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:07:27.487 06:20:39 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.487 06:20:39 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.487 06:20:39 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
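A minimal sketch of the allowed-methods behaviour the app_cmdline test above verifies: the target there was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are exposed. The error code and message are the ones recorded in the trace; the rpc.py path is the one used throughout this job, and jq availability is an assumption.

# With only spdk_get_version and rpc_get_methods exposed, any other method fails
# with the JSON-RPC "Method not found" error (code -32601) logged above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version                      # allowed, returns the version object
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # allowed, lists exactly those two methods
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats                # rejected: code -32601, "Method not found"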
00:07:27.487 06:20:39 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.487 06:20:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:27.487 [2024-07-23 06:20:39.877908] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:27.487 [2024-07-23 06:20:39.878139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:28.053 EAL: TSC is not safe to use in SMP mode 00:07:28.053 EAL: TSC is not invariant 00:07:28.053 [2024-07-23 06:20:40.430641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.053 [2024-07-23 06:20:40.518585] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:28.053 [2024-07-23 06:20:40.520873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.620 06:20:40 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.620 06:20:40 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:07:28.620 06:20:40 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:28.620 06:20:40 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:07:28.620 06:20:40 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:07:28.620 06:20:40 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.620 06:20:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:28.620 [2024-07-23 06:20:41.003301] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:28.620 [2024-07-23 06:20:41.003381] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:28.620 00:07:28.620 [2024-07-23 06:20:41.011290] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:28.620 [2024-07-23 06:20:41.011336] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:28.620 00:07:28.620 Malloc0 00:07:28.620 Malloc1 00:07:28.620 Malloc2 00:07:28.620 Malloc3 00:07:28.620 Malloc4 00:07:28.620 Malloc5 00:07:28.620 Malloc6 00:07:28.620 Malloc7 00:07:28.620 Malloc8 00:07:28.620 Malloc9 00:07:28.620 [2024-07-23 06:20:41.099297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:28.620 [2024-07-23 06:20:41.099372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.620 [2024-07-23 06:20:41.099413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21e482e3a980 00:07:28.620 [2024-07-23 06:20:41.099422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.620 [2024-07-23 06:20:41.099905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.620 [2024-07-23 06:20:41.099932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:28.620 TestPT 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.879 06:20:41 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:07:28.879 5000+0 records in 00:07:28.879 5000+0 records out 00:07:28.879 10240000 bytes transferred in 0.022119 secs (462956098 bytes/sec) 00:07:28.879 06:20:41 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 
2048 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:28.879 AIO0 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.879 06:20:41 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.879 06:20:41 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:07:28.879 06:20:41 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.879 06:20:41 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.879 06:20:41 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.879 06:20:41 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:28.879 06:20:41 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:28.879 06:20:41 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.879 06:20:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:29.140 06:20:41 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.140 06:20:41 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:29.140 06:20:41 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:29.141 06:20:41 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "af607535-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "af607535-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "1590fa81-487e-e757-a1d9-c153f4a901e3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1590fa81-487e-e757-a1d9-c153f4a901e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "d8084e70-7c9a-7255-a0d6-a88cb54325db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d8084e70-7c9a-7255-a0d6-a88cb54325db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d0a4431d-0fe3-7150-942b-f8a27dc1b874"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d0a4431d-0fe3-7150-942b-f8a27dc1b874",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "613d512a-94ef-1f57-9396-fee6d165188b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "613d512a-94ef-1f57-9396-fee6d165188b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "bd860dd0-2ec2-e652-a3b6-fce2dfad7005"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bd860dd0-2ec2-e652-a3b6-fce2dfad7005",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a3b9ff23-5a7f-775c-89cb-9a3526073a30"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a3b9ff23-5a7f-775c-89cb-9a3526073a30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "3da6b556-2569-a050-96aa-ef046c82235f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3da6b556-2569-a050-96aa-ef046c82235f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "e3f79df5-66ce-dd51-bb81-e0df13ee8078"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e3f79df5-66ce-dd51-bb81-e0df13ee8078",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "272ad2a1-f0ce-f25e-9e07-fe0f3548e42e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "272ad2a1-f0ce-f25e-9e07-fe0f3548e42e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c7aa541e-c01d-3f50-a87d-310921207d27"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c7aa541e-c01d-3f50-a87d-310921207d27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "4f2cc74e-785b-1057-b6cf-188b4679876a"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4f2cc74e-785b-1057-b6cf-188b4679876a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "af6df42d-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "af6df42d-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "af6df42d-48bb-11ef-a06c-59ddad71024c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "af65564c-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "af668edd-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "af6f1bb9-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "af6f1bb9-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "af6f1bb9-48bb-11ef-a06c-59ddad71024c",' ' "strip_size_kb": 64,' ' "state": "online",' ' 
"raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "af67c757-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "af68ffe1-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "af7053d6-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "af7053d6-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "af7053d6-48bb-11ef-a06c-59ddad71024c",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "af6a38e5-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "af6b70e1-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "af7843aa-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "af7843aa-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:07:29.141 06:20:41 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:29.141 06:20:41 blockdev_general -- bdev/blockdev.sh@751 -- # 
hello_world_bdev=Malloc0 00:07:29.141 06:20:41 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:29.141 06:20:41 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 47725 00:07:29.141 06:20:41 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 47725 ']' 00:07:29.141 06:20:41 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 47725 00:07:29.141 06:20:41 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:07:29.141 06:20:41 blockdev_general -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:29.141 06:20:41 blockdev_general -- common/autotest_common.sh@956 -- # ps -c -o command 47725 00:07:29.141 06:20:41 blockdev_general -- common/autotest_common.sh@956 -- # tail -1 00:07:29.141 06:20:41 blockdev_general -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:07:29.141 06:20:41 blockdev_general -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:07:29.141 06:20:41 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47725' 00:07:29.141 killing process with pid 47725 00:07:29.142 06:20:41 blockdev_general -- common/autotest_common.sh@967 -- # kill 47725 00:07:29.142 06:20:41 blockdev_general -- common/autotest_common.sh@972 -- # wait 47725 00:07:29.400 06:20:41 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:29.400 06:20:41 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:29.400 06:20:41 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:29.400 06:20:41 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.400 06:20:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:29.400 ************************************ 00:07:29.400 START TEST bdev_hello_world 00:07:29.400 ************************************ 00:07:29.400 06:20:41 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:29.400 [2024-07-23 06:20:41.833140] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:29.400 [2024-07-23 06:20:41.833309] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:29.974 EAL: TSC is not safe to use in SMP mode 00:07:29.974 EAL: TSC is not invariant 00:07:29.974 [2024-07-23 06:20:42.375211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.974 [2024-07-23 06:20:42.456651] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:29.974 [2024-07-23 06:20:42.458852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.233 [2024-07-23 06:20:42.517894] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:30.233 [2024-07-23 06:20:42.517962] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:30.233 [2024-07-23 06:20:42.525867] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:30.233 [2024-07-23 06:20:42.525908] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:30.233 [2024-07-23 06:20:42.533879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:30.233 [2024-07-23 06:20:42.533912] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:30.233 [2024-07-23 06:20:42.533922] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:30.233 [2024-07-23 06:20:42.581892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:30.233 [2024-07-23 06:20:42.581976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.233 [2024-07-23 06:20:42.581987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x38ec75636800 00:07:30.233 [2024-07-23 06:20:42.581996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.233 [2024-07-23 06:20:42.582506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.233 [2024-07-23 06:20:42.582531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:30.233 [2024-07-23 06:20:42.682002] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:30.233 [2024-07-23 06:20:42.682099] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:07:30.233 [2024-07-23 06:20:42.682116] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:30.233 [2024-07-23 06:20:42.682135] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:30.233 [2024-07-23 06:20:42.682154] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:30.233 [2024-07-23 06:20:42.682164] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:30.233 [2024-07-23 06:20:42.682176] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:07:30.233 00:07:30.233 [2024-07-23 06:20:42.682185] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:30.492 00:07:30.492 real 0m1.105s 00:07:30.492 user 0m0.528s 00:07:30.492 sys 0m0.576s 00:07:30.492 06:20:42 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.492 ************************************ 00:07:30.492 END TEST bdev_hello_world 00:07:30.492 ************************************ 00:07:30.492 06:20:42 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:30.492 06:20:42 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:30.492 06:20:42 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:30.492 06:20:42 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:30.492 06:20:42 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.492 06:20:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:30.492 ************************************ 00:07:30.492 START TEST bdev_bounds 00:07:30.492 ************************************ 00:07:30.492 06:20:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:07:30.492 06:20:42 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=47777 00:07:30.492 Process bdevio pid: 47777 00:07:30.492 06:20:42 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:30.492 06:20:42 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 47777' 00:07:30.492 06:20:42 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 47777 00:07:30.492 06:20:42 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:30.492 06:20:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 47777 ']' 00:07:30.492 06:20:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.492 06:20:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.492 06:20:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.492 06:20:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.492 06:20:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:30.492 [2024-07-23 06:20:42.991866] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:30.492 [2024-07-23 06:20:42.992166] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:31.427 EAL: TSC is not safe to use in SMP mode 00:07:31.427 EAL: TSC is not invariant 00:07:31.427 [2024-07-23 06:20:43.609704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.427 [2024-07-23 06:20:43.698842] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:31.427 [2024-07-23 06:20:43.698924] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
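A minimal sketch of how the two example binaries exercised in this part of the run can be launched outside the harness, against the same bdev configuration file referenced in the trace. The binary and config paths come from the trace; the working directory and the omission of the trailing empty argument passed by the harness are assumptions.

cd /home/vagrant/spdk_repo/spdk
# hello_bdev opens the named bdev, writes "Hello World!" and reads it back, as logged above.
./build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0
# bdevio starts an RPC-driven I/O tester over every bdev in the same config; the harness
# then triggers the per-bdev suites seen below via bdevio/tests.py perform_tests.
./test/bdev/bdevio/bdevio -w -s 2048 --json test/bdev/bdev.json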
00:07:31.427 [2024-07-23 06:20:43.698949] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:07:31.427 [2024-07-23 06:20:43.702499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.427 [2024-07-23 06:20:43.702383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.427 [2024-07-23 06:20:43.702494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.427 [2024-07-23 06:20:43.762098] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:31.427 [2024-07-23 06:20:43.762161] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:31.427 [2024-07-23 06:20:43.770079] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:31.427 [2024-07-23 06:20:43.770113] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:31.427 [2024-07-23 06:20:43.778090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:31.427 [2024-07-23 06:20:43.778123] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:31.427 [2024-07-23 06:20:43.778132] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:31.427 [2024-07-23 06:20:43.826114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:31.427 [2024-07-23 06:20:43.826177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.427 [2024-07-23 06:20:43.826189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11931236800 00:07:31.427 [2024-07-23 06:20:43.826197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.427 [2024-07-23 06:20:43.826700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.427 [2024-07-23 06:20:43.826729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:31.685 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.685 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:07:31.685 06:20:44 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:31.685 I/O targets: 00:07:31.685 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:07:31.685 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:07:31.685 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:07:31.685 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:07:31.685 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:07:31.685 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:07:31.685 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:07:31.685 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:07:31.685 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:07:31.685 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:07:31.685 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:07:31.685 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:07:31.685 raid0: 131072 blocks of 512 bytes (64 MiB) 00:07:31.685 concat0: 131072 blocks of 512 bytes (64 MiB) 00:07:31.685 raid1: 65536 blocks of 512 bytes (32 MiB) 00:07:31.685 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:07:31.685 00:07:31.685 00:07:31.685 CUnit - A unit testing framework for C - Version 2.1-3 00:07:31.685 http://cunit.sourceforge.net/ 00:07:31.685 00:07:31.685 00:07:31.685 Suite: bdevio tests on: 
AIO0 00:07:31.685 Test: blockdev write read block ...passed 00:07:31.685 Test: blockdev write zeroes read block ...passed 00:07:31.685 Test: blockdev write zeroes read no split ...passed 00:07:31.962 Test: blockdev write zeroes read split ...passed 00:07:31.962 Test: blockdev write zeroes read split partial ...passed 00:07:31.962 Test: blockdev reset ...passed 00:07:31.962 Test: blockdev write read 8 blocks ...passed 00:07:31.962 Test: blockdev write read size > 128k ...passed 00:07:31.962 Test: blockdev write read invalid size ...passed 00:07:31.962 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.962 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.962 Test: blockdev write read max offset ...passed 00:07:31.962 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.962 Test: blockdev writev readv 8 blocks ...passed 00:07:31.962 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.962 Test: blockdev writev readv block ...passed 00:07:31.962 Test: blockdev writev readv size > 128k ...passed 00:07:31.962 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.962 Test: blockdev comparev and writev ...passed 00:07:31.962 Test: blockdev nvme passthru rw ...passed 00:07:31.962 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.962 Test: blockdev nvme admin passthru ...passed 00:07:31.962 Test: blockdev copy ...passed 00:07:31.962 Suite: bdevio tests on: raid1 00:07:31.962 Test: blockdev write read block ...passed 00:07:31.962 Test: blockdev write zeroes read block ...passed 00:07:31.962 Test: blockdev write zeroes read no split ...passed 00:07:31.962 Test: blockdev write zeroes read split ...passed 00:07:31.962 Test: blockdev write zeroes read split partial ...passed 00:07:31.962 Test: blockdev reset ...passed 00:07:31.962 Test: blockdev write read 8 blocks ...passed 00:07:31.962 Test: blockdev write read size > 128k ...passed 00:07:31.962 Test: blockdev write read invalid size ...passed 00:07:31.962 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.962 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.962 Test: blockdev write read max offset ...passed 00:07:31.962 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.962 Test: blockdev writev readv 8 blocks ...passed 00:07:31.962 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.962 Test: blockdev writev readv block ...passed 00:07:31.962 Test: blockdev writev readv size > 128k ...passed 00:07:31.962 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.962 Test: blockdev comparev and writev ...passed 00:07:31.962 Test: blockdev nvme passthru rw ...passed 00:07:31.962 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.962 Test: blockdev nvme admin passthru ...passed 00:07:31.962 Test: blockdev copy ...passed 00:07:31.962 Suite: bdevio tests on: concat0 00:07:31.962 Test: blockdev write read block ...passed 00:07:31.962 Test: blockdev write zeroes read block ...passed 00:07:31.962 Test: blockdev write zeroes read no split ...passed 00:07:31.962 Test: blockdev write zeroes read split ...passed 00:07:31.962 Test: blockdev write zeroes read split partial ...passed 00:07:31.962 Test: blockdev reset ...passed 00:07:31.962 Test: blockdev write read 8 blocks ...passed 00:07:31.962 Test: blockdev write read size > 128k ...passed 00:07:31.962 Test: blockdev write read invalid size ...passed 00:07:31.962 
Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.962 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.962 Test: blockdev write read max offset ...passed 00:07:31.962 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.962 Test: blockdev writev readv 8 blocks ...passed 00:07:31.962 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.962 Test: blockdev writev readv block ...passed 00:07:31.962 Test: blockdev writev readv size > 128k ...passed 00:07:31.962 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.962 Test: blockdev comparev and writev ...passed 00:07:31.962 Test: blockdev nvme passthru rw ...passed 00:07:31.962 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.962 Test: blockdev nvme admin passthru ...passed 00:07:31.962 Test: blockdev copy ...passed 00:07:31.962 Suite: bdevio tests on: raid0 00:07:31.962 Test: blockdev write read block ...passed 00:07:31.962 Test: blockdev write zeroes read block ...passed 00:07:31.962 Test: blockdev write zeroes read no split ...passed 00:07:31.962 Test: blockdev write zeroes read split ...passed 00:07:31.962 Test: blockdev write zeroes read split partial ...passed 00:07:31.962 Test: blockdev reset ...passed 00:07:31.962 Test: blockdev write read 8 blocks ...passed 00:07:31.962 Test: blockdev write read size > 128k ...passed 00:07:31.962 Test: blockdev write read invalid size ...passed 00:07:31.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.963 Test: blockdev write read max offset ...passed 00:07:31.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.963 Test: blockdev writev readv 8 blocks ...passed 00:07:31.963 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.963 Test: blockdev writev readv block ...passed 00:07:31.963 Test: blockdev writev readv size > 128k ...passed 00:07:31.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.963 Test: blockdev comparev and writev ...passed 00:07:31.963 Test: blockdev nvme passthru rw ...passed 00:07:31.963 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.963 Test: blockdev nvme admin passthru ...passed 00:07:31.963 Test: blockdev copy ...passed 00:07:31.963 Suite: bdevio tests on: TestPT 00:07:31.963 Test: blockdev write read block ...passed 00:07:31.963 Test: blockdev write zeroes read block ...passed 00:07:31.963 Test: blockdev write zeroes read no split ...passed 00:07:31.963 Test: blockdev write zeroes read split ...passed 00:07:31.963 Test: blockdev write zeroes read split partial ...passed 00:07:31.963 Test: blockdev reset ...passed 00:07:31.963 Test: blockdev write read 8 blocks ...passed 00:07:31.963 Test: blockdev write read size > 128k ...passed 00:07:31.963 Test: blockdev write read invalid size ...passed 00:07:31.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.963 Test: blockdev write read max offset ...passed 00:07:31.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.963 Test: blockdev writev readv 8 blocks ...passed 00:07:31.963 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.963 Test: blockdev writev readv block ...passed 00:07:31.963 Test: blockdev writev readv size > 128k ...passed 
00:07:31.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.963 Test: blockdev comparev and writev ...passed 00:07:31.963 Test: blockdev nvme passthru rw ...passed 00:07:31.963 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.963 Test: blockdev nvme admin passthru ...passed 00:07:31.963 Test: blockdev copy ...passed 00:07:31.963 Suite: bdevio tests on: Malloc2p7 00:07:31.963 Test: blockdev write read block ...passed 00:07:31.963 Test: blockdev write zeroes read block ...passed 00:07:31.963 Test: blockdev write zeroes read no split ...passed 00:07:31.963 Test: blockdev write zeroes read split ...passed 00:07:31.963 Test: blockdev write zeroes read split partial ...passed 00:07:31.963 Test: blockdev reset ...passed 00:07:31.963 Test: blockdev write read 8 blocks ...passed 00:07:31.963 Test: blockdev write read size > 128k ...passed 00:07:31.963 Test: blockdev write read invalid size ...passed 00:07:31.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.963 Test: blockdev write read max offset ...passed 00:07:31.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.963 Test: blockdev writev readv 8 blocks ...passed 00:07:31.963 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.963 Test: blockdev writev readv block ...passed 00:07:31.963 Test: blockdev writev readv size > 128k ...passed 00:07:31.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.963 Test: blockdev comparev and writev ...passed 00:07:31.963 Test: blockdev nvme passthru rw ...passed 00:07:31.963 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.963 Test: blockdev nvme admin passthru ...passed 00:07:31.963 Test: blockdev copy ...passed 00:07:31.963 Suite: bdevio tests on: Malloc2p6 00:07:31.963 Test: blockdev write read block ...passed 00:07:31.963 Test: blockdev write zeroes read block ...passed 00:07:31.963 Test: blockdev write zeroes read no split ...passed 00:07:31.963 Test: blockdev write zeroes read split ...passed 00:07:31.963 Test: blockdev write zeroes read split partial ...passed 00:07:31.963 Test: blockdev reset ...passed 00:07:31.963 Test: blockdev write read 8 blocks ...passed 00:07:31.963 Test: blockdev write read size > 128k ...passed 00:07:31.963 Test: blockdev write read invalid size ...passed 00:07:31.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.963 Test: blockdev write read max offset ...passed 00:07:31.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.963 Test: blockdev writev readv 8 blocks ...passed 00:07:31.963 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.963 Test: blockdev writev readv block ...passed 00:07:31.963 Test: blockdev writev readv size > 128k ...passed 00:07:31.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.963 Test: blockdev comparev and writev ...passed 00:07:31.963 Test: blockdev nvme passthru rw ...passed 00:07:31.963 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.963 Test: blockdev nvme admin passthru ...passed 00:07:31.963 Test: blockdev copy ...passed 00:07:31.963 Suite: bdevio tests on: Malloc2p5 00:07:31.963 Test: blockdev write read block ...passed 00:07:31.963 Test: blockdev write zeroes read block ...passed 00:07:31.963 Test: blockdev 
write zeroes read no split ...passed 00:07:31.963 Test: blockdev write zeroes read split ...passed 00:07:31.963 Test: blockdev write zeroes read split partial ...passed 00:07:31.963 Test: blockdev reset ...passed 00:07:31.963 Test: blockdev write read 8 blocks ...passed 00:07:31.963 Test: blockdev write read size > 128k ...passed 00:07:31.963 Test: blockdev write read invalid size ...passed 00:07:31.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.963 Test: blockdev write read max offset ...passed 00:07:31.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.963 Test: blockdev writev readv 8 blocks ...passed 00:07:31.963 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.963 Test: blockdev writev readv block ...passed 00:07:31.963 Test: blockdev writev readv size > 128k ...passed 00:07:31.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.963 Test: blockdev comparev and writev ...passed 00:07:31.963 Test: blockdev nvme passthru rw ...passed 00:07:31.963 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.963 Test: blockdev nvme admin passthru ...passed 00:07:31.963 Test: blockdev copy ...passed 00:07:31.963 Suite: bdevio tests on: Malloc2p4 00:07:31.963 Test: blockdev write read block ...passed 00:07:31.963 Test: blockdev write zeroes read block ...passed 00:07:31.963 Test: blockdev write zeroes read no split ...passed 00:07:31.963 Test: blockdev write zeroes read split ...passed 00:07:31.963 Test: blockdev write zeroes read split partial ...passed 00:07:31.963 Test: blockdev reset ...passed 00:07:31.963 Test: blockdev write read 8 blocks ...passed 00:07:31.963 Test: blockdev write read size > 128k ...passed 00:07:31.963 Test: blockdev write read invalid size ...passed 00:07:31.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.963 Test: blockdev write read max offset ...passed 00:07:31.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.963 Test: blockdev writev readv 8 blocks ...passed 00:07:31.963 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.963 Test: blockdev writev readv block ...passed 00:07:31.963 Test: blockdev writev readv size > 128k ...passed 00:07:31.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.963 Test: blockdev comparev and writev ...passed 00:07:31.963 Test: blockdev nvme passthru rw ...passed 00:07:31.963 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.963 Test: blockdev nvme admin passthru ...passed 00:07:31.963 Test: blockdev copy ...passed 00:07:31.963 Suite: bdevio tests on: Malloc2p3 00:07:31.963 Test: blockdev write read block ...passed 00:07:31.963 Test: blockdev write zeroes read block ...passed 00:07:31.963 Test: blockdev write zeroes read no split ...passed 00:07:31.963 Test: blockdev write zeroes read split ...passed 00:07:31.963 Test: blockdev write zeroes read split partial ...passed 00:07:31.963 Test: blockdev reset ...passed 00:07:31.963 Test: blockdev write read 8 blocks ...passed 00:07:31.963 Test: blockdev write read size > 128k ...passed 00:07:31.963 Test: blockdev write read invalid size ...passed 00:07:31.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.963 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:07:31.963 Test: blockdev write read max offset ...passed 00:07:31.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.963 Test: blockdev writev readv 8 blocks ...passed 00:07:31.963 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.963 Test: blockdev writev readv block ...passed 00:07:31.963 Test: blockdev writev readv size > 128k ...passed 00:07:31.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.963 Test: blockdev comparev and writev ...passed 00:07:31.963 Test: blockdev nvme passthru rw ...passed 00:07:31.963 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.964 Test: blockdev nvme admin passthru ...passed 00:07:31.964 Test: blockdev copy ...passed 00:07:31.964 Suite: bdevio tests on: Malloc2p2 00:07:31.964 Test: blockdev write read block ...passed 00:07:31.964 Test: blockdev write zeroes read block ...passed 00:07:31.964 Test: blockdev write zeroes read no split ...passed 00:07:31.964 Test: blockdev write zeroes read split ...passed 00:07:31.964 Test: blockdev write zeroes read split partial ...passed 00:07:31.964 Test: blockdev reset ...passed 00:07:31.964 Test: blockdev write read 8 blocks ...passed 00:07:31.964 Test: blockdev write read size > 128k ...passed 00:07:31.964 Test: blockdev write read invalid size ...passed 00:07:31.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.964 Test: blockdev write read max offset ...passed 00:07:31.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.964 Test: blockdev writev readv 8 blocks ...passed 00:07:31.964 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.964 Test: blockdev writev readv block ...passed 00:07:31.964 Test: blockdev writev readv size > 128k ...passed 00:07:31.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.964 Test: blockdev comparev and writev ...passed 00:07:31.964 Test: blockdev nvme passthru rw ...passed 00:07:31.964 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.964 Test: blockdev nvme admin passthru ...passed 00:07:31.964 Test: blockdev copy ...passed 00:07:31.964 Suite: bdevio tests on: Malloc2p1 00:07:31.964 Test: blockdev write read block ...passed 00:07:31.964 Test: blockdev write zeroes read block ...passed 00:07:31.964 Test: blockdev write zeroes read no split ...passed 00:07:31.964 Test: blockdev write zeroes read split ...passed 00:07:31.964 Test: blockdev write zeroes read split partial ...passed 00:07:31.964 Test: blockdev reset ...passed 00:07:31.964 Test: blockdev write read 8 blocks ...passed 00:07:31.964 Test: blockdev write read size > 128k ...passed 00:07:31.964 Test: blockdev write read invalid size ...passed 00:07:31.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.964 Test: blockdev write read max offset ...passed 00:07:31.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.964 Test: blockdev writev readv 8 blocks ...passed 00:07:31.964 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.964 Test: blockdev writev readv block ...passed 00:07:31.964 Test: blockdev writev readv size > 128k ...passed 00:07:31.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.964 Test: blockdev comparev and writev ...passed 
00:07:31.964 Test: blockdev nvme passthru rw ...passed 00:07:31.964 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.964 Test: blockdev nvme admin passthru ...passed 00:07:31.964 Test: blockdev copy ...passed 00:07:31.964 Suite: bdevio tests on: Malloc2p0 00:07:31.964 Test: blockdev write read block ...passed 00:07:31.964 Test: blockdev write zeroes read block ...passed 00:07:31.964 Test: blockdev write zeroes read no split ...passed 00:07:31.964 Test: blockdev write zeroes read split ...passed 00:07:31.964 Test: blockdev write zeroes read split partial ...passed 00:07:31.964 Test: blockdev reset ...passed 00:07:31.964 Test: blockdev write read 8 blocks ...passed 00:07:31.964 Test: blockdev write read size > 128k ...passed 00:07:31.964 Test: blockdev write read invalid size ...passed 00:07:31.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.964 Test: blockdev write read max offset ...passed 00:07:31.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.964 Test: blockdev writev readv 8 blocks ...passed 00:07:31.964 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.964 Test: blockdev writev readv block ...passed 00:07:31.964 Test: blockdev writev readv size > 128k ...passed 00:07:31.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.964 Test: blockdev comparev and writev ...passed 00:07:31.964 Test: blockdev nvme passthru rw ...passed 00:07:31.964 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.964 Test: blockdev nvme admin passthru ...passed 00:07:31.964 Test: blockdev copy ...passed 00:07:31.964 Suite: bdevio tests on: Malloc1p1 00:07:31.964 Test: blockdev write read block ...passed 00:07:31.964 Test: blockdev write zeroes read block ...passed 00:07:31.964 Test: blockdev write zeroes read no split ...passed 00:07:31.964 Test: blockdev write zeroes read split ...passed 00:07:31.964 Test: blockdev write zeroes read split partial ...passed 00:07:31.964 Test: blockdev reset ...passed 00:07:31.964 Test: blockdev write read 8 blocks ...passed 00:07:31.964 Test: blockdev write read size > 128k ...passed 00:07:31.964 Test: blockdev write read invalid size ...passed 00:07:31.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.964 Test: blockdev write read max offset ...passed 00:07:31.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.964 Test: blockdev writev readv 8 blocks ...passed 00:07:31.964 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.964 Test: blockdev writev readv block ...passed 00:07:31.964 Test: blockdev writev readv size > 128k ...passed 00:07:31.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.964 Test: blockdev comparev and writev ...passed 00:07:31.964 Test: blockdev nvme passthru rw ...passed 00:07:31.964 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.964 Test: blockdev nvme admin passthru ...passed 00:07:31.964 Test: blockdev copy ...passed 00:07:31.964 Suite: bdevio tests on: Malloc1p0 00:07:31.964 Test: blockdev write read block ...passed 00:07:31.964 Test: blockdev write zeroes read block ...passed 00:07:31.964 Test: blockdev write zeroes read no split ...passed 00:07:31.964 Test: blockdev write zeroes read split ...passed 00:07:31.964 Test: blockdev write 
zeroes read split partial ...passed 00:07:31.964 Test: blockdev reset ...passed 00:07:31.964 Test: blockdev write read 8 blocks ...passed 00:07:31.964 Test: blockdev write read size > 128k ...passed 00:07:31.964 Test: blockdev write read invalid size ...passed 00:07:31.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.964 Test: blockdev write read max offset ...passed 00:07:31.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.964 Test: blockdev writev readv 8 blocks ...passed 00:07:31.964 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.964 Test: blockdev writev readv block ...passed 00:07:31.964 Test: blockdev writev readv size > 128k ...passed 00:07:31.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.964 Test: blockdev comparev and writev ...passed 00:07:31.964 Test: blockdev nvme passthru rw ...passed 00:07:31.964 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.964 Test: blockdev nvme admin passthru ...passed 00:07:31.964 Test: blockdev copy ...passed 00:07:31.964 Suite: bdevio tests on: Malloc0 00:07:31.964 Test: blockdev write read block ...passed 00:07:31.964 Test: blockdev write zeroes read block ...passed 00:07:31.964 Test: blockdev write zeroes read no split ...passed 00:07:31.964 Test: blockdev write zeroes read split ...passed 00:07:31.964 Test: blockdev write zeroes read split partial ...passed 00:07:31.964 Test: blockdev reset ...passed 00:07:31.964 Test: blockdev write read 8 blocks ...passed 00:07:31.964 Test: blockdev write read size > 128k ...passed 00:07:31.964 Test: blockdev write read invalid size ...passed 00:07:31.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.964 Test: blockdev write read max offset ...passed 00:07:31.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.964 Test: blockdev writev readv 8 blocks ...passed 00:07:31.964 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.964 Test: blockdev writev readv block ...passed 00:07:31.964 Test: blockdev writev readv size > 128k ...passed 00:07:31.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.964 Test: blockdev comparev and writev ...passed 00:07:31.964 Test: blockdev nvme passthru rw ...passed 00:07:31.964 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.964 Test: blockdev nvme admin passthru ...passed 00:07:31.964 Test: blockdev copy ...passed 00:07:31.964 00:07:31.964 Run Summary: Type Total Ran Passed Failed Inactive 00:07:31.964 suites 16 16 n/a 0 0 00:07:31.964 tests 368 368 368 0 0 00:07:31.964 asserts 2224 2224 2224 0 n/a 00:07:31.964 00:07:31.964 Elapsed time = 0.539 seconds 00:07:31.964 0 00:07:31.964 06:20:44 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 47777 00:07:31.964 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 47777 ']' 00:07:31.964 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 47777 00:07:31.964 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:07:31.964 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:31.964 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps -c -o 
command 47777 00:07:31.965 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # tail -1 00:07:31.965 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=bdevio 00:07:31.965 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' bdevio = sudo ']' 00:07:31.965 killing process with pid 47777 00:07:31.965 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47777' 00:07:31.965 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 47777 00:07:31.965 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 47777 00:07:32.224 06:20:44 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:32.224 00:07:32.224 real 0m1.732s 00:07:32.224 user 0m3.202s 00:07:32.224 sys 0m0.852s 00:07:32.224 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.224 06:20:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:32.224 ************************************ 00:07:32.224 END TEST bdev_bounds 00:07:32.224 ************************************ 00:07:32.482 06:20:44 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:32.482 06:20:44 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:32.482 06:20:44 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:32.482 06:20:44 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.482 06:20:44 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:32.482 ************************************ 00:07:32.482 START TEST bdev_nbd 00:07:32.482 ************************************ 00:07:32.482 06:20:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:32.482 06:20:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:32.482 06:20:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ FreeBSD == Linux ]] 00:07:32.482 06:20:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # return 0 00:07:32.482 00:07:32.482 real 0m0.005s 00:07:32.482 user 0m0.003s 00:07:32.482 sys 0m0.002s 00:07:32.482 06:20:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.482 06:20:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:32.482 ************************************ 00:07:32.482 END TEST bdev_nbd 00:07:32.482 ************************************ 00:07:32.482 06:20:44 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:32.482 06:20:44 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:32.482 06:20:44 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:07:32.482 06:20:44 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:07:32.482 06:20:44 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:07:32.482 06:20:44 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
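[editor's note] The teardown traced above resolves pid 47777 to the `bdevio` process name with `ps -c -o command 47777 | tail -1` (the FreeBSD branch of the helper), checks it is not a `sudo` wrapper, then kills and reaps it. A minimal sketch of that flow, assuming a helper shaped like the one traced here; the Linux branch and the function name are illustrative, not copied from autotest_common.sh:

#!/usr/bin/env bash
# Sketch of the killprocess flow traced above (pid 47777 -> "bdevio").
# Only the FreeBSD branch mirrors the logged commands; the Linux branch
# and the function name are assumptions.
killprocess_sketch() {
    local pid=$1 process_name

    if [[ $(uname) == FreeBSD ]]; then
        # FreeBSD path seen in the log: -c prints just the command name,
        # tail -1 drops the header row
        process_name=$(ps -c -o command "$pid" | tail -1)
    else
        # Assumed Linux equivalent (not exercised in this FreeBSD run)
        process_name=$(ps -o comm= -p "$pid")
    fi

    # Guard seen in the log: never kill a sudo wrapper directly
    [[ $process_name == sudo ]] && return 1

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # reap the child so its exit status is collected
}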
00:07:32.482 06:20:44 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.482 06:20:44 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:32.482 ************************************ 00:07:32.482 START TEST bdev_fio 00:07:32.482 ************************************ 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:07:32.482 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:07:32.482 06:20:44 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.420 06:20:45 
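[editor's note] At this point fio_config_gen has created test/bdev/bdev.fio for a verify workload (bdev_type AIO) and probes the fio version; the trace that follows shows fio-3.35 being accepted, serialize_overlap=1 being emitted, and one [job_<bdev>]/filename=<bdev> pair being echoed per bdev before the file is driven through the spdk_bdev ioengine. A rough sketch of that generation step, under the assumption that the echoed lines are appended to the config file; the variable names and the redirections are illustrative, while the job names, filename= lines, serialize_overlap=1 and the fio parameters come from this log:

#!/usr/bin/env bash
# Rough sketch of how bdev.fio is assembled, based on the echo trace below.
config=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

bdevs_name=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3
            Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0)

# Emitted once the fio version probe matches fio-3.x (fio-3.35 in this run)
echo "serialize_overlap=1" >> "$config"

for b in "${bdevs_name[@]}"; do
    echo "[job_$b]"    >> "$config"   # one fio job section per bdev ...
    echo "filename=$b" >> "$config"   # ... bound to that bdev by name
done

# The finished file is then run through the SPDK fio plugin with:
#   --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10
#   --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json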
blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]' 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]' 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:07:33.420 06:20:45 
blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:07:33.420 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:07:33.421 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.421 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:07:33.421 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:07:33.421 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:33.421 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:07:33.421 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 00:07:33.421 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:07:33.421 06:20:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:07:33.421 06:20:45 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:33.421 06:20:45 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.421 06:20:45 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:07:33.421 ************************************ 00:07:33.421 START TEST bdev_fio_rw_verify 00:07:33.421 ************************************ 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # 
local asan_lib= 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:07:33.421 06:20:45 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:07:33.421 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:33.421 fio-3.35 00:07:33.421 Starting 16 threads 00:07:33.989 EAL: TSC is not safe to use in SMP mode 00:07:33.989 EAL: TSC is not invariant 00:07:46.224 00:07:46.224 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=101355: Tue Jul 23 06:20:57 2024 00:07:46.224 read: IOPS=227k, BW=885MiB/s (928MB/s)(8856MiB/10002msec) 00:07:46.224 slat (nsec): min=290, max=566940k, avg=3867.58, stdev=543455.46 00:07:46.224 clat (nsec): min=875, max=566968k, avg=48054.91, stdev=1698228.49 00:07:46.224 lat (usec): min=2, max=566968, avg=51.92, stdev=1783.10 00:07:46.224 clat percentiles (usec): 00:07:46.224 | 50.000th=[ 10], 99.000th=[ 717], 99.900th=[ 1188], 00:07:46.224 | 99.990th=[ 87557], 99.999th=[212861] 00:07:46.224 write: IOPS=386k, BW=1506MiB/s (1580MB/s)(14.4GiB/9803msec); 0 zone resets 00:07:46.224 slat (nsec): min=562, max=511841k, avg=21810.00, stdev=1017110.39 00:07:46.224 clat (nsec): min=844, max=511936k, avg=105862.29, stdev=2136049.78 00:07:46.224 lat (usec): min=12, max=511950, avg=127.67, stdev=2367.48 00:07:46.224 clat percentiles (usec): 00:07:46.224 | 50.000th=[ 51], 99.000th=[ 701], 99.900th=[ 2868], 00:07:46.224 | 99.990th=[ 95945], 99.999th=[227541] 00:07:46.224 bw ( MiB/s): min= 443, max= 2529, per=97.94%, avg=1475.42, stdev=41.79, samples=302 00:07:46.224 iops : min=113522, max=647611, avg=377708.25, stdev=10698.79, samples=302 00:07:46.224 lat (nsec) : 1000=0.01% 00:07:46.224 lat (usec) : 2=0.05%, 4=10.97%, 10=17.52%, 20=21.67%, 50=16.20% 00:07:46.224 lat (usec) : 100=28.91%, 250=2.87%, 500=0.23%, 750=0.83%, 1000=0.57% 00:07:46.224 lat (msec) : 2=0.07%, 4=0.04%, 10=0.02%, 20=0.01%, 50=0.01% 00:07:46.224 lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01% 00:07:46.224 cpu : usr=55.25%, sys=3.30%, ctx=887132, majf=0, minf=606 00:07:46.224 IO depths : 1=12.5%, 2=25.0%, 4=49.9%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:46.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:46.224 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:46.224 issued rwts: total=2267054,3780473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:46.224 latency : target=0, window=0, percentile=100.00%, depth=8 00:07:46.224 00:07:46.224 Run status group 0 (all jobs): 00:07:46.224 READ: bw=885MiB/s (928MB/s), 885MiB/s-885MiB/s (928MB/s-928MB/s), io=8856MiB (9286MB), run=10002-10002msec 00:07:46.224 WRITE: bw=1506MiB/s (1580MB/s), 1506MiB/s-1506MiB/s (1580MB/s-1580MB/s), io=14.4GiB (15.5GB), run=9803-9803msec 00:07:46.224 00:07:46.224 real 0m12.565s 00:07:46.224 user 1m33.053s 00:07:46.224 sys 0m7.719s 00:07:46.224 06:20:58 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.224 06:20:58 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:07:46.224 ************************************ 
00:07:46.224 END TEST bdev_fio_rw_verify 00:07:46.224 ************************************ 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:07:46.224 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "af607535-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "af607535-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "1590fa81-487e-e757-a1d9-c153f4a901e3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"1590fa81-487e-e757-a1d9-c153f4a901e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "d8084e70-7c9a-7255-a0d6-a88cb54325db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d8084e70-7c9a-7255-a0d6-a88cb54325db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d0a4431d-0fe3-7150-942b-f8a27dc1b874"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d0a4431d-0fe3-7150-942b-f8a27dc1b874",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "613d512a-94ef-1f57-9396-fee6d165188b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "613d512a-94ef-1f57-9396-fee6d165188b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": 
false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "bd860dd0-2ec2-e652-a3b6-fce2dfad7005"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bd860dd0-2ec2-e652-a3b6-fce2dfad7005",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a3b9ff23-5a7f-775c-89cb-9a3526073a30"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a3b9ff23-5a7f-775c-89cb-9a3526073a30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "3da6b556-2569-a050-96aa-ef046c82235f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3da6b556-2569-a050-96aa-ef046c82235f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "e3f79df5-66ce-dd51-bb81-e0df13ee8078"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e3f79df5-66ce-dd51-bb81-e0df13ee8078",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": 
true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "272ad2a1-f0ce-f25e-9e07-fe0f3548e42e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "272ad2a1-f0ce-f25e-9e07-fe0f3548e42e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c7aa541e-c01d-3f50-a87d-310921207d27"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c7aa541e-c01d-3f50-a87d-310921207d27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "4f2cc74e-785b-1057-b6cf-188b4679876a"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4f2cc74e-785b-1057-b6cf-188b4679876a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' 
' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "af6df42d-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "af6df42d-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "af6df42d-48bb-11ef-a06c-59ddad71024c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "af65564c-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "af668edd-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "af6f1bb9-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "af6f1bb9-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "af6f1bb9-48bb-11ef-a06c-59ddad71024c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "af67c757-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": 
"af68ffe1-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "af7053d6-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "af7053d6-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "af7053d6-48bb-11ef-a06c-59ddad71024c",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "af6a38e5-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "af6b70e1-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "af7843aa-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "af7843aa-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:07:46.225 Malloc1p0 00:07:46.225 Malloc1p1 00:07:46.225 Malloc2p0 00:07:46.225 Malloc2p1 00:07:46.225 Malloc2p2 00:07:46.225 Malloc2p3 00:07:46.225 Malloc2p4 00:07:46.225 Malloc2p5 00:07:46.225 Malloc2p6 00:07:46.225 Malloc2p7 00:07:46.225 TestPT 00:07:46.225 raid0 00:07:46.225 concat0 ]] 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:07:46.225 06:20:58 
blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "af607535-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "af607535-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "1590fa81-487e-e757-a1d9-c153f4a901e3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1590fa81-487e-e757-a1d9-c153f4a901e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "d8084e70-7c9a-7255-a0d6-a88cb54325db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d8084e70-7c9a-7255-a0d6-a88cb54325db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d0a4431d-0fe3-7150-942b-f8a27dc1b874"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d0a4431d-0fe3-7150-942b-f8a27dc1b874",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "613d512a-94ef-1f57-9396-fee6d165188b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "613d512a-94ef-1f57-9396-fee6d165188b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "bd860dd0-2ec2-e652-a3b6-fce2dfad7005"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bd860dd0-2ec2-e652-a3b6-fce2dfad7005",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a3b9ff23-5a7f-775c-89cb-9a3526073a30"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a3b9ff23-5a7f-775c-89cb-9a3526073a30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "3da6b556-2569-a050-96aa-ef046c82235f"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3da6b556-2569-a050-96aa-ef046c82235f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "e3f79df5-66ce-dd51-bb81-e0df13ee8078"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e3f79df5-66ce-dd51-bb81-e0df13ee8078",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "272ad2a1-f0ce-f25e-9e07-fe0f3548e42e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "272ad2a1-f0ce-f25e-9e07-fe0f3548e42e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c7aa541e-c01d-3f50-a87d-310921207d27"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c7aa541e-c01d-3f50-a87d-310921207d27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "4f2cc74e-785b-1057-b6cf-188b4679876a"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4f2cc74e-785b-1057-b6cf-188b4679876a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "af6df42d-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "af6df42d-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "af6df42d-48bb-11ef-a06c-59ddad71024c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "af65564c-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "af668edd-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "af6f1bb9-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "af6f1bb9-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "af6f1bb9-48bb-11ef-a06c-59ddad71024c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "af67c757-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "af68ffe1-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "af7053d6-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "af7053d6-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "af7053d6-48bb-11ef-a06c-59ddad71024c",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "af6a38e5-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "af6b70e1-48bb-11ef-a06c-59ddad71024c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "af7843aa-48bb-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "af7843aa-48bb-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:46.225 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf 
'%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.226 06:20:58 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:07:46.226 ************************************ 00:07:46.226 START TEST bdev_fio_trim 00:07:46.226 ************************************ 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:07:46.226 06:20:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:07:46.226 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:46.226 
job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:46.226 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:46.226 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:46.226 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:46.226 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:46.226 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:46.226 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:46.226 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:46.226 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:46.226 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:46.226 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:46.226 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:46.226 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:46.226 fio-3.35 00:07:46.226 Starting 14 threads 00:07:46.485 EAL: TSC is not safe to use in SMP mode 00:07:46.485 EAL: TSC is not invariant 00:07:58.688 00:07:58.688 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=101374: Tue Jul 23 06:21:09 2024 00:07:58.688 write: IOPS=2448k, BW=9561MiB/s (10.0GB/s)(93.4GiB/10002msec); 0 zone resets 00:07:58.688 slat (nsec): min=279, max=968515k, avg=1468.16, stdev=282730.35 00:07:58.688 clat (nsec): min=1356, max=968530k, avg=15322.67, stdev=835183.27 00:07:58.688 lat (usec): min=2, max=968530, avg=16.79, stdev=881.74 00:07:58.688 clat percentiles (usec): 00:07:58.688 | 50.000th=[ 7], 99.000th=[ 16], 99.900th=[ 955], 99.990th=[ 979], 00:07:58.688 | 99.999th=[94897] 00:07:58.688 bw ( MiB/s): min= 2917, max=14890, per=100.00%, avg=9773.23, stdev=292.84, samples=259 00:07:58.688 iops : min=746980, max=3811870, avg=2501946.82, stdev=74967.98, samples=259 00:07:58.688 trim: IOPS=2448k, BW=9561MiB/s (10.0GB/s)(93.4GiB/10002msec); 0 zone resets 00:07:58.688 slat (nsec): min=567, max=285729k, avg=1399.29, stdev=196003.16 00:07:58.688 clat (nsec): min=405, max=1315.0M, avg=11018.10, stdev=809655.91 00:07:58.688 lat (nsec): min=1682, max=1315.0M, avg=12417.40, stdev=833048.07 00:07:58.688 clat percentiles (usec): 00:07:58.688 | 50.000th=[ 8], 99.000th=[ 16], 99.900th=[ 23], 99.990th=[ 39], 00:07:58.688 | 99.999th=[94897] 00:07:58.688 bw ( MiB/s): min= 2917, max=14890, per=100.00%, avg=9773.24, stdev=292.84, samples=259 00:07:58.688 iops : min=746980, max=3811868, avg=2501948.61, stdev=74967.96, samples=259 00:07:58.688 lat (nsec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:07:58.688 lat (usec) : 2=0.10%, 4=23.90%, 10=58.25%, 20=17.30%, 50=0.19% 00:07:58.688 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.20% 00:07:58.688 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 
50=0.01% 00:07:58.688 lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:07:58.688 lat (msec) : 2000=0.01% 00:07:58.688 cpu : usr=63.82%, sys=4.06%, ctx=1112781, majf=0, minf=0 00:07:58.688 IO depths : 1=12.5%, 2=24.9%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:58.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:58.688 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:58.688 issued rwts: total=0,24480632,24480636,0 short=0,0,0,0 dropped=0,0,0,0 00:07:58.688 latency : target=0, window=0, percentile=100.00%, depth=8 00:07:58.688 00:07:58.688 Run status group 0 (all jobs): 00:07:58.688 WRITE: bw=9561MiB/s (10.0GB/s), 9561MiB/s-9561MiB/s (10.0GB/s-10.0GB/s), io=93.4GiB (100GB), run=10002-10002msec 00:07:58.688 TRIM: bw=9561MiB/s (10.0GB/s), 9561MiB/s-9561MiB/s (10.0GB/s-10.0GB/s), io=93.4GiB (100GB), run=10002-10002msec 00:07:58.688 00:07:58.688 real 0m12.678s 00:07:58.688 user 1m35.107s 00:07:58.688 sys 0m9.010s 00:07:58.688 06:21:10 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.688 06:21:10 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:07:58.688 ************************************ 00:07:58.688 END TEST bdev_fio_trim 00:07:58.688 ************************************ 00:07:58.688 06:21:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:07:58.688 06:21:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:07:58.688 06:21:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:07:58.688 /home/vagrant/spdk_repo/spdk 00:07:58.688 06:21:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:07:58.688 06:21:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:07:58.688 00:07:58.688 real 0m26.222s 00:07:58.688 user 3m8.437s 00:07:58.688 sys 0m17.385s 00:07:58.688 06:21:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.688 06:21:11 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:07:58.688 ************************************ 00:07:58.688 END TEST bdev_fio 00:07:58.688 ************************************ 00:07:58.688 06:21:11 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:58.688 06:21:11 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:58.688 06:21:11 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:58.688 06:21:11 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:07:58.688 06:21:11 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.688 06:21:11 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:58.688 ************************************ 00:07:58.688 START TEST bdev_verify 00:07:58.688 ************************************ 00:07:58.688 06:21:11 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:58.688 [2024-07-23 06:21:11.092806] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:58.688 [2024-07-23 06:21:11.093094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:59.254 EAL: TSC is not safe to use in SMP mode 00:07:59.254 EAL: TSC is not invariant 00:07:59.254 [2024-07-23 06:21:11.638509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:59.254 [2024-07-23 06:21:11.736262] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:59.254 [2024-07-23 06:21:11.736317] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:07:59.254 [2024-07-23 06:21:11.739450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.254 [2024-07-23 06:21:11.739435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.512 [2024-07-23 06:21:11.799398] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:59.512 [2024-07-23 06:21:11.799446] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:59.512 [2024-07-23 06:21:11.807378] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:59.512 [2024-07-23 06:21:11.807406] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:59.512 [2024-07-23 06:21:11.815394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:59.512 [2024-07-23 06:21:11.815422] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:59.512 [2024-07-23 06:21:11.815432] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:59.512 [2024-07-23 06:21:11.863411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:59.512 [2024-07-23 06:21:11.863466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.512 [2024-07-23 06:21:11.863480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6f112a36800 00:07:59.512 [2024-07-23 06:21:11.863490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.512 [2024-07-23 06:21:11.863911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.512 [2024-07-23 06:21:11.863942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:59.512 Running I/O for 5 seconds... 
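(For reference, the bdev_verify stage traced above boils down to a single bdevperf invocation; a minimal sketch using only the paths and flags visible in the trace, with the run_test/xtrace wrapping from autotest_common.sh and the wrapper's empty trailing argument omitted:)
# Sketch of the verify run traced above (assumes the vagrant repo layout used by this job).
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" \
    --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3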
00:08:04.788 00:08:04.788 Latency(us) 00:08:04.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.788 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.788 Verification LBA range: start 0x0 length 0x1000 00:08:04.788 Malloc0 : 5.03 6998.00 27.34 0.00 0.00 18273.42 64.23 44087.97 00:08:04.788 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.788 Verification LBA range: start 0x1000 length 0x1000 00:08:04.788 Malloc0 : 5.04 150.31 0.59 0.00 0.00 850272.13 551.10 1998019.00 00:08:04.788 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.788 Verification LBA range: start 0x0 length 0x800 00:08:04.788 Malloc1p0 : 5.02 6166.64 24.09 0.00 0.00 20744.93 279.27 21567.36 00:08:04.788 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.788 Verification LBA range: start 0x800 length 0x800 00:08:04.788 Malloc1p0 : 5.01 6669.26 26.05 0.00 0.00 19180.46 338.85 19065.07 00:08:04.788 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.788 Verification LBA range: start 0x0 length 0x800 00:08:04.789 Malloc1p1 : 5.02 6166.28 24.09 0.00 0.00 20742.16 262.52 21090.73 00:08:04.789 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x800 length 0x800 00:08:04.789 Malloc1p1 : 5.01 6668.92 26.05 0.00 0.00 19176.70 344.44 18707.60 00:08:04.789 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x0 length 0x200 00:08:04.789 Malloc2p0 : 5.02 6165.97 24.09 0.00 0.00 20739.53 275.55 20733.26 00:08:04.789 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x200 length 0x200 00:08:04.789 Malloc2p0 : 5.01 6668.65 26.05 0.00 0.00 19173.25 318.37 18350.13 00:08:04.789 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x0 length 0x200 00:08:04.789 Malloc2p1 : 5.02 6165.69 24.08 0.00 0.00 20736.37 296.03 20256.63 00:08:04.789 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x200 length 0x200 00:08:04.789 Malloc2p1 : 5.02 6677.08 26.08 0.00 0.00 19144.98 320.23 18111.81 00:08:04.789 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x0 length 0x200 00:08:04.789 Malloc2p2 : 5.02 6165.39 24.08 0.00 0.00 20733.16 271.83 19780.01 00:08:04.789 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x200 length 0x200 00:08:04.789 Malloc2p2 : 5.02 6676.79 26.08 0.00 0.00 19141.17 353.75 18230.97 00:08:04.789 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x0 length 0x200 00:08:04.789 Malloc2p3 : 5.02 6165.11 24.08 0.00 0.00 20730.23 290.44 17277.72 00:08:04.789 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x200 length 0x200 00:08:04.789 Malloc2p3 : 5.02 6676.49 26.08 0.00 0.00 19137.48 327.68 18230.97 00:08:04.789 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x0 length 0x200 00:08:04.789 Malloc2p4 : 5.02 6164.82 24.08 0.00 0.00 20727.12 275.55 16801.09 
00:08:04.789 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x200 length 0x200 00:08:04.789 Malloc2p4 : 5.02 6676.24 26.08 0.00 0.00 19133.94 329.54 18230.97 00:08:04.789 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x0 length 0x200 00:08:04.789 Malloc2p5 : 5.02 6164.55 24.08 0.00 0.00 20724.28 284.86 16801.09 00:08:04.789 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x200 length 0x200 00:08:04.789 Malloc2p5 : 5.02 6675.97 26.08 0.00 0.00 19130.20 320.23 18230.97 00:08:04.789 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x0 length 0x200 00:08:04.789 Malloc2p6 : 5.03 6164.26 24.08 0.00 0.00 20721.28 262.52 17635.19 00:08:04.789 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x200 length 0x200 00:08:04.789 Malloc2p6 : 5.02 6675.71 26.08 0.00 0.00 19126.97 312.79 18350.13 00:08:04.789 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x0 length 0x200 00:08:04.789 Malloc2p7 : 5.03 6163.99 24.08 0.00 0.00 20718.31 268.10 18826.75 00:08:04.789 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x200 length 0x200 00:08:04.789 Malloc2p7 : 5.02 6675.43 26.08 0.00 0.00 19123.50 318.37 18469.28 00:08:04.789 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x0 length 0x1000 00:08:04.789 TestPT : 5.03 6138.39 23.98 0.00 0.00 20788.63 718.66 19065.07 00:08:04.789 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x1000 length 0x1000 00:08:04.789 TestPT : 5.03 5246.73 20.50 0.00 0.00 24322.05 1511.80 65297.85 00:08:04.789 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x0 length 0x2000 00:08:04.789 raid0 : 5.03 6163.60 24.08 0.00 0.00 20712.15 275.55 19303.38 00:08:04.789 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x2000 length 0x2000 00:08:04.789 raid0 : 5.02 6674.69 26.07 0.00 0.00 19117.26 333.27 18350.13 00:08:04.789 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x0 length 0x2000 00:08:04.789 concat0 : 5.03 6163.33 24.08 0.00 0.00 20709.29 275.55 19899.16 00:08:04.789 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x2000 length 0x2000 00:08:04.789 concat0 : 5.02 6674.39 26.07 0.00 0.00 19113.57 338.85 18350.13 00:08:04.789 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x0 length 0x1000 00:08:04.789 raid1 : 5.03 6163.05 24.07 0.00 0.00 20705.43 314.65 20852.42 00:08:04.789 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x1000 length 0x1000 00:08:04.789 raid1 : 5.02 6674.10 26.07 0.00 0.00 19108.43 390.98 18945.91 00:08:04.789 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x0 length 0x4e2 00:08:04.789 
AIO0 : 5.08 799.51 3.12 0.00 0.00 159106.70 1079.86 308854.08 00:08:04.789 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:04.789 Verification LBA range: start 0x4e2 length 0x4e2 00:08:04.789 AIO0 : 5.07 803.28 3.14 0.00 0.00 158189.66 19184.22 303134.56 00:08:04.789 =================================================================================================================== 00:08:04.789 Total : 187042.62 730.64 0.00 0.00 21864.00 64.23 1998019.00 00:08:05.048 00:08:05.048 real 0m6.238s 00:08:05.048 user 0m10.131s 00:08:05.048 sys 0m0.653s 00:08:05.048 06:21:17 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.048 ************************************ 00:08:05.048 06:21:17 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:05.048 END TEST bdev_verify 00:08:05.048 ************************************ 00:08:05.048 06:21:17 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:05.048 06:21:17 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:05.048 06:21:17 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:05.048 06:21:17 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.048 06:21:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:05.048 ************************************ 00:08:05.048 START TEST bdev_verify_big_io 00:08:05.048 ************************************ 00:08:05.048 06:21:17 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:05.048 [2024-07-23 06:21:17.375770] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:05.048 [2024-07-23 06:21:17.376015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:05.613 EAL: TSC is not safe to use in SMP mode 00:08:05.613 EAL: TSC is not invariant 00:08:05.613 [2024-07-23 06:21:17.938284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:05.613 [2024-07-23 06:21:18.026213] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:05.613 [2024-07-23 06:21:18.026261] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:08:05.613 [2024-07-23 06:21:18.029220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.613 [2024-07-23 06:21:18.029203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.613 [2024-07-23 06:21:18.087651] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:05.613 [2024-07-23 06:21:18.087722] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:05.613 [2024-07-23 06:21:18.095638] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:05.613 [2024-07-23 06:21:18.095678] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:05.613 [2024-07-23 06:21:18.103658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:05.613 [2024-07-23 06:21:18.103699] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:05.613 [2024-07-23 06:21:18.103713] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:05.873 [2024-07-23 06:21:18.151676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:05.873 [2024-07-23 06:21:18.151717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.873 [2024-07-23 06:21:18.151728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x39dcfda36800 00:08:05.873 [2024-07-23 06:21:18.151736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.873 [2024-07-23 06:21:18.152122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.873 [2024-07-23 06:21:18.152153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:05.873 [2024-07-23 06:21:18.253293] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.253556] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.253761] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.253948] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.254125] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.254305] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). 
Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.254485] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.254643] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.254814] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.254991] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.255168] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.255341] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.255518] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.255699] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.255872] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.256034] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:08:05.873 [2024-07-23 06:21:18.257745] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:08:05.873 [2024-07-23 06:21:18.257953] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:08:05.873 Running I/O for 5 seconds... 
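(The bdevperf_construct_job *WARNING* lines above show the requested -q 128 being clamped per bdev for the verify workload. The clamp values line up with the bdev sizes reported in the earlier JSON dumps — num_blocks * block_size divided by the 64 KiB IO size of this run — at roughly half that count. A quick back-of-envelope check; the halving factor is inferred from these numbers, not stated anywhere in the log:)
# Chunks of 64 KiB per bdev, using sizes from the bdev JSON dumped earlier.
io=65536
echo "Malloc2p0: $(( 8192 * 512  / io )) chunks"   # 64  -> clamp reported: 32
echo "AIO0:      $(( 5000 * 2048 / io )) chunks"   # 156 -> clamp reported: 78
(Bdevs with 256 or more such chunks — Malloc0, Malloc1p0/1p1, TestPT, raid0, concat0, raid1 — keep the full depth of 128, which is consistent with no warnings being printed for them.)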
00:08:11.144 00:08:11.144 Latency(us) 00:08:11.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.144 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x100 00:08:11.144 Malloc0 : 5.06 4074.34 254.65 0.00 0.00 31333.01 83.78 82456.41 00:08:11.144 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x100 length 0x100 00:08:11.144 Malloc0 : 5.06 4050.67 253.17 0.00 0.00 31517.02 83.32 104381.24 00:08:11.144 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x80 00:08:11.144 Malloc1p0 : 5.08 1033.38 64.59 0.00 0.00 123335.45 528.76 178258.37 00:08:11.144 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x80 length 0x80 00:08:11.144 Malloc1p0 : 5.07 1412.74 88.30 0.00 0.00 90218.25 666.53 134408.72 00:08:11.144 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x80 00:08:11.144 Malloc1p1 : 5.09 528.42 33.03 0.00 0.00 240845.49 312.79 297415.04 00:08:11.144 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x80 length 0x80 00:08:11.144 Malloc1p1 : 5.09 528.60 33.04 0.00 0.00 240737.97 305.34 285976.00 00:08:11.144 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x20 00:08:11.144 Malloc2p0 : 5.07 511.32 31.96 0.00 0.00 62187.39 249.48 106287.75 00:08:11.144 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x20 length 0x20 00:08:11.144 Malloc2p0 : 5.07 511.49 31.97 0.00 0.00 62158.32 245.76 96278.59 00:08:11.144 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x20 00:08:11.144 Malloc2p1 : 5.07 511.29 31.96 0.00 0.00 62146.26 249.48 105334.49 00:08:11.144 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x20 length 0x20 00:08:11.144 Malloc2p1 : 5.07 511.46 31.97 0.00 0.00 62137.91 255.07 95325.33 00:08:11.144 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x20 00:08:11.144 Malloc2p2 : 5.07 511.26 31.95 0.00 0.00 62121.14 249.48 104381.24 00:08:11.144 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x20 length 0x20 00:08:11.144 Malloc2p2 : 5.07 511.43 31.96 0.00 0.00 62108.21 243.90 94848.71 00:08:11.144 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x20 00:08:11.144 Malloc2p3 : 5.07 511.23 31.95 0.00 0.00 62102.63 284.86 103904.61 00:08:11.144 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x20 length 0x20 00:08:11.144 Malloc2p3 : 5.07 511.41 31.96 0.00 0.00 62090.92 256.93 93895.45 00:08:11.144 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x20 00:08:11.144 Malloc2p4 : 5.07 511.21 31.95 0.00 0.00 62087.52 294.17 102951.36 00:08:11.144 Job: 
Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x20 length 0x20 00:08:11.144 Malloc2p4 : 5.07 511.38 31.96 0.00 0.00 62053.00 286.72 93418.83 00:08:11.144 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x20 00:08:11.144 Malloc2p5 : 5.07 511.18 31.95 0.00 0.00 62055.24 301.62 101998.11 00:08:11.144 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x20 length 0x20 00:08:11.144 Malloc2p5 : 5.07 511.35 31.96 0.00 0.00 62030.64 264.38 92465.57 00:08:11.144 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x20 00:08:11.144 Malloc2p6 : 5.07 511.15 31.95 0.00 0.00 62022.54 292.31 101044.85 00:08:11.144 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x20 length 0x20 00:08:11.144 Malloc2p6 : 5.07 511.32 31.96 0.00 0.00 62014.32 256.93 91512.32 00:08:11.144 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x20 00:08:11.144 Malloc2p7 : 5.07 511.13 31.95 0.00 0.00 62002.12 284.86 99614.97 00:08:11.144 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x20 length 0x20 00:08:11.144 Malloc2p7 : 5.07 511.29 31.96 0.00 0.00 61985.38 273.69 90559.07 00:08:11.144 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x100 00:08:11.144 TestPT : 5.12 519.68 32.48 0.00 0.00 242409.89 6911.09 253565.39 00:08:11.144 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x100 length 0x100 00:08:11.144 TestPT : 5.20 247.53 15.47 0.00 0.00 508233.74 11975.25 560512.96 00:08:11.144 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x200 00:08:11.144 raid0 : 5.09 528.39 33.02 0.00 0.00 239342.21 335.13 278349.98 00:08:11.144 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x200 length 0x200 00:08:11.144 raid0 : 5.09 528.58 33.04 0.00 0.00 239226.90 336.99 265004.43 00:08:11.144 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x200 00:08:11.144 concat0 : 5.09 531.44 33.21 0.00 0.00 237691.22 359.33 270723.95 00:08:11.144 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x200 length 0x200 00:08:11.144 concat0 : 5.09 531.66 33.23 0.00 0.00 237601.59 346.30 259284.91 00:08:11.144 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x100 00:08:11.144 raid1 : 5.09 531.41 33.21 0.00 0.00 237254.17 420.77 263097.92 00:08:11.144 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x100 length 0x100 00:08:11.144 raid1 : 5.08 534.92 33.43 0.00 0.00 235778.08 389.12 249752.37 00:08:11.144 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x0 length 0x4e 00:08:11.144 AIO0 : 5.08 529.22 33.08 0.00 
0.00 145012.70 867.61 161099.81 00:08:11.144 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:08:11.144 Verification LBA range: start 0x4e length 0x4e 00:08:11.144 AIO0 : 5.08 528.38 33.02 0.00 0.00 145249.75 513.86 149660.77 00:08:11.145 =================================================================================================================== 00:08:11.145 Total : 24820.26 1551.27 0.00 0.00 98410.97 83.32 560512.96 00:08:11.403 00:08:11.403 real 0m6.398s 00:08:11.403 user 0m11.253s 00:08:11.403 sys 0m0.749s 00:08:11.403 06:21:23 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.403 ************************************ 00:08:11.403 END TEST bdev_verify_big_io 00:08:11.403 ************************************ 00:08:11.403 06:21:23 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:11.403 06:21:23 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:11.403 06:21:23 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:11.403 06:21:23 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:11.403 06:21:23 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.403 06:21:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:11.403 ************************************ 00:08:11.403 START TEST bdev_write_zeroes 00:08:11.403 ************************************ 00:08:11.403 06:21:23 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:11.403 [2024-07-23 06:21:23.819627] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:11.403 [2024-07-23 06:21:23.819830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:11.969 EAL: TSC is not safe to use in SMP mode 00:08:11.969 EAL: TSC is not invariant 00:08:11.969 [2024-07-23 06:21:24.409267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.228 [2024-07-23 06:21:24.511638] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:08:12.228 [2024-07-23 06:21:24.514273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.228 [2024-07-23 06:21:24.573562] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:12.228 [2024-07-23 06:21:24.573618] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:12.228 [2024-07-23 06:21:24.581552] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:12.228 [2024-07-23 06:21:24.581594] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:12.228 [2024-07-23 06:21:24.589564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:12.228 [2024-07-23 06:21:24.589603] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:12.228 [2024-07-23 06:21:24.589628] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:12.228 [2024-07-23 06:21:24.637588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:12.228 [2024-07-23 06:21:24.637647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.228 [2024-07-23 06:21:24.637659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf77e5836800 00:08:12.228 [2024-07-23 06:21:24.637667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.228 [2024-07-23 06:21:24.638056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.228 [2024-07-23 06:21:24.638076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:12.486 Running I/O for 1 seconds... 
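This bdev_write_zeroes pass is a plain bdevperf run against the bdev.json config: one second of write_zeroes at queue depth 128 and 4 KiB I/O. A standalone equivalent of the invocation traced above, with the repo path taken from this log (adjust for a local checkout):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" \
        --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w write_zeroes -t 1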
00:08:13.418 00:08:13.418 Latency(us) 00:08:13.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.418 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 Malloc0 : 1.01 30481.03 119.07 0.00 0.00 4198.80 161.05 7983.50 00:08:13.418 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 Malloc1p0 : 1.01 30475.16 119.04 0.00 0.00 4197.40 188.04 7804.76 00:08:13.418 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 Malloc1p1 : 1.01 30472.38 119.03 0.00 0.00 4196.13 180.60 7566.45 00:08:13.418 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 Malloc2p0 : 1.01 30468.15 119.02 0.00 0.00 4195.25 185.25 7357.92 00:08:13.418 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 Malloc2p1 : 1.01 30465.28 119.00 0.00 0.00 4194.07 213.18 7149.40 00:08:13.418 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 Malloc2p2 : 1.01 30462.32 118.99 0.00 0.00 4191.45 184.32 7089.82 00:08:13.418 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 Malloc2p3 : 1.01 30459.66 118.98 0.00 0.00 4190.53 182.46 6911.09 00:08:13.418 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 Malloc2p4 : 1.01 30455.31 118.97 0.00 0.00 4189.23 194.56 7000.45 00:08:13.418 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 Malloc2p5 : 1.01 30452.38 118.95 0.00 0.00 4188.23 182.46 7000.45 00:08:13.418 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 Malloc2p6 : 1.01 30448.19 118.94 0.00 0.00 4186.68 188.04 6970.67 00:08:13.418 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 Malloc2p7 : 1.01 30445.38 118.93 0.00 0.00 4185.35 184.32 7030.24 00:08:13.418 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 TestPT : 1.01 30442.58 118.92 0.00 0.00 4183.38 215.04 6881.30 00:08:13.418 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 raid0 : 1.01 30437.45 118.90 0.00 0.00 4181.81 268.10 6911.09 00:08:13.418 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 concat0 : 1.01 30434.16 118.88 0.00 0.00 4180.02 266.24 6940.88 00:08:13.418 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 raid1 : 1.01 30428.83 118.86 0.00 0.00 4177.16 474.76 6970.67 00:08:13.418 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.418 AIO0 : 1.06 2713.72 10.60 0.00 0.00 45800.10 525.03 172538.85 00:08:13.418 =================================================================================================================== 00:08:13.418 Total : 459541.98 1795.09 0.00 0.00 4445.97 161.05 172538.85 00:08:13.677 00:08:13.677 real 0m2.248s 00:08:13.677 user 0m1.482s 00:08:13.677 sys 0m0.653s 00:08:13.677 06:21:26 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.677 06:21:26 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:13.677 ************************************ 00:08:13.677 END TEST bdev_write_zeroes 00:08:13.677 ************************************ 00:08:13.677 06:21:26 blockdev_general 
-- common/autotest_common.sh@1142 -- # return 0 00:08:13.677 06:21:26 blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:13.677 06:21:26 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:13.677 06:21:26 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.677 06:21:26 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:13.677 ************************************ 00:08:13.677 START TEST bdev_json_nonenclosed 00:08:13.677 ************************************ 00:08:13.677 06:21:26 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:13.677 [2024-07-23 06:21:26.118909] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:13.677 [2024-07-23 06:21:26.119169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:14.243 EAL: TSC is not safe to use in SMP mode 00:08:14.243 EAL: TSC is not invariant 00:08:14.243 [2024-07-23 06:21:26.700837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.501 [2024-07-23 06:21:26.789419] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:14.501 [2024-07-23 06:21:26.791558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.501 [2024-07-23 06:21:26.791629] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:08:14.501 [2024-07-23 06:21:26.791640] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:14.501 [2024-07-23 06:21:26.791649] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.501 00:08:14.501 real 0m0.794s 00:08:14.501 user 0m0.178s 00:08:14.501 sys 0m0.614s 00:08:14.501 06:21:26 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:08:14.501 06:21:26 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.501 06:21:26 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:14.501 ************************************ 00:08:14.501 END TEST bdev_json_nonenclosed 00:08:14.501 ************************************ 00:08:14.501 06:21:26 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:08:14.501 06:21:26 blockdev_general -- bdev/blockdev.sh@781 -- # true 00:08:14.501 06:21:26 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:14.501 06:21:26 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:14.501 06:21:26 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.501 06:21:26 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:14.501 ************************************ 00:08:14.501 START TEST bdev_json_nonarray 00:08:14.501 ************************************ 00:08:14.501 06:21:26 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:14.501 [2024-07-23 06:21:26.958206] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:14.501 [2024-07-23 06:21:26.958452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:15.067 EAL: TSC is not safe to use in SMP mode 00:08:15.067 EAL: TSC is not invariant 00:08:15.067 [2024-07-23 06:21:27.484433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.067 [2024-07-23 06:21:27.569858] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:15.067 [2024-07-23 06:21:27.572221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.068 [2024-07-23 06:21:27.572295] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
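bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is pointed at configs that json_config rejects (top-level content not enclosed in {}, and a "subsystems" key that is not an array), and the harness expects the resulting exit status 234 (the es=234 captured above and just below). Illustrative configs showing the difference; the file contents here are a guess at the shape being exercised, not copied from the repo:

    # accepted shape: a JSON object whose "subsystems" member is an array
    printf '{ "subsystems": [] }\n'   > good.json
    # "nonenclosed": valid-looking members with no enclosing {}
    printf '"subsystems": []\n'       > nonenclosed.json
    # "nonarray": "subsystems" present but not an array
    printf '{ "subsystems": {} }\n'   > nonarray.json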
00:08:15.068 [2024-07-23 06:21:27.572306] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:15.068 [2024-07-23 06:21:27.572314] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:15.326 00:08:15.326 real 0m0.742s 00:08:15.326 user 0m0.168s 00:08:15.326 sys 0m0.567s 00:08:15.326 06:21:27 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:08:15.326 06:21:27 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.326 ************************************ 00:08:15.326 END TEST bdev_json_nonarray 00:08:15.327 ************************************ 00:08:15.327 06:21:27 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:15.327 06:21:27 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:08:15.327 06:21:27 blockdev_general -- bdev/blockdev.sh@784 -- # true 00:08:15.327 06:21:27 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]] 00:08:15.327 06:21:27 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite '' 00:08:15.327 06:21:27 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:15.327 06:21:27 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.327 06:21:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:15.327 ************************************ 00:08:15.327 START TEST bdev_qos 00:08:15.327 ************************************ 00:08:15.327 06:21:27 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:08:15.327 06:21:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=48186 00:08:15.327 Process qos testing pid: 48186 00:08:15.327 06:21:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 48186' 00:08:15.327 06:21:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:08:15.327 06:21:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:08:15.327 06:21:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 48186 00:08:15.327 06:21:27 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 48186 ']' 00:08:15.327 06:21:27 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.327 06:21:27 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.327 06:21:27 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.327 06:21:27 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.327 06:21:27 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:15.327 [2024-07-23 06:21:27.750881] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
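The qos suite (pid 48186) drives bdevperf in RPC-controlled mode: -z makes it start idle and wait on the RPC socket, the harness then creates the Malloc_0 and Null_1 bdevs over RPC and kicks the workload off with bdevperf.py perform_tests, as the following trace shows. A rough standalone equivalent of that setup, assuming scripts/rpc.py stands in for the harness's rpc_cmd wrapper:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" -z -m 0x2 -q 256 -o 4096 -w randread -t 60 &
    sleep 1                                           # crude; the harness uses waitforlisten
    "$SPDK/scripts/rpc.py" bdev_malloc_create -b Malloc_0 128 512
    "$SPDK/scripts/rpc.py" bdev_null_create Null_1 128 512
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" perform_tests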
00:08:15.327 [2024-07-23 06:21:27.751137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:15.895 EAL: TSC is not safe to use in SMP mode 00:08:15.895 EAL: TSC is not invariant 00:08:15.895 [2024-07-23 06:21:28.293150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.895 [2024-07-23 06:21:28.392426] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:08:15.895 [2024-07-23 06:21:28.394972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:16.463 Malloc_0 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:16.463 [ 00:08:16.463 { 00:08:16.463 "name": "Malloc_0", 00:08:16.463 "aliases": [ 00:08:16.463 "cbe2ce03-48bb-11ef-a06c-59ddad71024c" 00:08:16.463 ], 00:08:16.463 "product_name": "Malloc disk", 00:08:16.463 "block_size": 512, 00:08:16.463 "num_blocks": 262144, 00:08:16.463 "uuid": "cbe2ce03-48bb-11ef-a06c-59ddad71024c", 00:08:16.463 "assigned_rate_limits": { 00:08:16.463 "rw_ios_per_sec": 0, 00:08:16.463 "rw_mbytes_per_sec": 0, 00:08:16.463 "r_mbytes_per_sec": 0, 00:08:16.463 "w_mbytes_per_sec": 0 00:08:16.463 }, 00:08:16.463 "claimed": false, 00:08:16.463 "zoned": false, 00:08:16.463 "supported_io_types": { 00:08:16.463 "read": true, 00:08:16.463 "write": true, 00:08:16.463 "unmap": true, 00:08:16.463 "flush": true, 00:08:16.463 "reset": true, 00:08:16.463 "nvme_admin": false, 00:08:16.463 "nvme_io": false, 00:08:16.463 "nvme_io_md": false, 00:08:16.463 "write_zeroes": true, 00:08:16.463 "zcopy": true, 00:08:16.463 
"get_zone_info": false, 00:08:16.463 "zone_management": false, 00:08:16.463 "zone_append": false, 00:08:16.463 "compare": false, 00:08:16.463 "compare_and_write": false, 00:08:16.463 "abort": true, 00:08:16.463 "seek_hole": false, 00:08:16.463 "seek_data": false, 00:08:16.463 "copy": true, 00:08:16.463 "nvme_iov_md": false 00:08:16.463 }, 00:08:16.463 "memory_domains": [ 00:08:16.463 { 00:08:16.463 "dma_device_id": "system", 00:08:16.463 "dma_device_type": 1 00:08:16.463 }, 00:08:16.463 { 00:08:16.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.463 "dma_device_type": 2 00:08:16.463 } 00:08:16.463 ], 00:08:16.463 "driver_specific": {} 00:08:16.463 } 00:08:16.463 ] 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:16.463 Null_1 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:16.463 [ 00:08:16.463 { 00:08:16.463 "name": "Null_1", 00:08:16.463 "aliases": [ 00:08:16.463 "cbe7aee3-48bb-11ef-a06c-59ddad71024c" 00:08:16.463 ], 00:08:16.463 "product_name": "Null disk", 00:08:16.463 "block_size": 512, 00:08:16.463 "num_blocks": 262144, 00:08:16.463 "uuid": "cbe7aee3-48bb-11ef-a06c-59ddad71024c", 00:08:16.463 "assigned_rate_limits": { 00:08:16.463 "rw_ios_per_sec": 0, 00:08:16.463 "rw_mbytes_per_sec": 0, 00:08:16.463 "r_mbytes_per_sec": 0, 00:08:16.463 "w_mbytes_per_sec": 0 00:08:16.463 }, 00:08:16.463 "claimed": false, 00:08:16.463 "zoned": false, 00:08:16.463 "supported_io_types": { 00:08:16.463 "read": true, 00:08:16.463 "write": true, 00:08:16.463 "unmap": false, 00:08:16.463 "flush": false, 00:08:16.463 "reset": true, 00:08:16.463 "nvme_admin": false, 00:08:16.463 "nvme_io": false, 00:08:16.463 "nvme_io_md": false, 00:08:16.463 "write_zeroes": true, 00:08:16.463 "zcopy": 
false, 00:08:16.463 "get_zone_info": false, 00:08:16.463 "zone_management": false, 00:08:16.463 "zone_append": false, 00:08:16.463 "compare": false, 00:08:16.463 "compare_and_write": false, 00:08:16.463 "abort": true, 00:08:16.463 "seek_hole": false, 00:08:16.463 "seek_data": false, 00:08:16.463 "copy": false, 00:08:16.463 "nvme_iov_md": false 00:08:16.463 }, 00:08:16.463 "driver_specific": {} 00:08:16.463 } 00:08:16.463 ] 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:08:16.463 06:21:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:08:16.722 Running I/O for 60 seconds... 
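The first QoS stage measures Malloc_0's unthrottled read rate with scripts/iostat.py (grep for the device, take the last sample, read the IOPS column), derives a much lower limit from it, and applies that limit with bdev_set_qos_limit before re-measuring under throttle. A sketch of the measure-and-apply step; the limit derivation shown (a quarter of the measured rate, truncated to a whole thousand) is inferred from the 628175 -> 157000 numbers in this run, not from the script source:

    SPDK=/home/vagrant/spdk_repo/spdk
    iops=$("$SPDK/scripts/iostat.py" -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}')
    iops_limit=$(( ${iops%.*} / 4 / 1000 * 1000 ))    # 628175.89 -> 157000 in this run
    "$SPDK/scripts/rpc.py" bdev_set_qos_limit --rw_ios_per_sec "$iops_limit" Malloc_0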
00:08:21.997 06:21:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 628175.89 2512703.57 0.00 0.00 2717696.00 0.00 0.00 ' 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=628175.89 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 628175 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=628175 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=157000 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 157000 -gt 1000 ']' 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 157000 Malloc_0 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 157000 IOPS Malloc_0 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.997 06:21:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:21.997 ************************************ 00:08:21.997 START TEST bdev_qos_iops 00:08:21.997 ************************************ 00:08:21.997 06:21:34 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 157000 IOPS Malloc_0 00:08:21.997 06:21:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=157000 00:08:21.997 06:21:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:08:21.997 06:21:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:08:21.997 06:21:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:08:21.997 06:21:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:08:21.997 06:21:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:08:21.997 06:21:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:21.997 06:21:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:08:21.997 06:21:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 00:08:27.269 06:21:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 157047.51 628190.03 0.00 0.00 659168.00 0.00 0.00 ' 00:08:27.269 06:21:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:08:27.269 06:21:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:08:27.269 06:21:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=157047.51 00:08:27.269 06:21:39 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 157047 00:08:27.269 06:21:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=157047 00:08:27.269 06:21:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:08:27.269 06:21:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=141300 00:08:27.269 06:21:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=172700 00:08:27.269 06:21:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 157047 -lt 141300 ']' 00:08:27.269 06:21:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 157047 -gt 172700 ']' 00:08:27.269 00:08:27.269 real 0m5.349s 00:08:27.269 user 0m0.159s 00:08:27.269 sys 0m0.002s 00:08:27.269 06:21:39 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.269 06:21:39 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:08:27.269 ************************************ 00:08:27.269 END TEST bdev_qos_iops 00:08:27.269 ************************************ 00:08:27.527 06:21:39 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:08:27.527 06:21:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:08:27.527 06:21:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:08:27.527 06:21:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:08:27.527 06:21:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:08:27.527 06:21:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:27.527 06:21:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:08:27.527 06:21:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 383793.55 1535174.19 0.00 0.00 1656832.00 0.00 0.00 ' 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=1656832.00 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 1656832 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=1656832 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=161 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 161 -lt 2 ']' 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 161 Null_1 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 161 BANDWIDTH Null_1 00:08:32.797 06:21:45 
blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.797 06:21:45 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:32.797 ************************************ 00:08:32.797 START TEST bdev_qos_bw 00:08:32.797 ************************************ 00:08:32.797 06:21:45 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 161 BANDWIDTH Null_1 00:08:32.797 06:21:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=161 00:08:32.797 06:21:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:08:32.797 06:21:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:08:32.797 06:21:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:08:32.797 06:21:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:08:32.797 06:21:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:08:32.797 06:21:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:32.797 06:21:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:08:32.797 06:21:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 41219.38 164877.51 0.00 0.00 177720.00 0.00 0.00 ' 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=177720.00 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 177720 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=177720 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=164864 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=148377 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=181350 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 177720 -lt 148377 ']' 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 177720 -gt 181350 ']' 00:08:39.395 00:08:39.395 real 0m5.508s 00:08:39.395 user 0m0.151s 00:08:39.395 sys 0m0.009s 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:08:39.395 ************************************ 00:08:39.395 END TEST bdev_qos_bw 00:08:39.395 ************************************ 00:08:39.395 06:21:50 blockdev_general.bdev_qos -- 
common/autotest_common.sh@1142 -- # return 0 00:08:39.395 06:21:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:08:39.395 06:21:50 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.395 06:21:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:39.395 06:21:50 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.395 06:21:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:08:39.395 06:21:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:39.395 06:21:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.395 06:21:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:39.395 ************************************ 00:08:39.395 START TEST bdev_qos_ro_bw 00:08:39.395 ************************************ 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:08:39.395 06:21:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:08:44.665 06:21:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 512.20 2048.80 0.00 0.00 2132.00 0.00 0.00 ' 00:08:44.665 06:21:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:08:44.665 06:21:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:08:44.665 06:21:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:08:44.665 06:21:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2132.00 00:08:44.665 06:21:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2132 00:08:44.665 06:21:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2132 00:08:44.665 06:21:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:08:44.665 06:21:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048 00:08:44.665 06:21:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:08:44.665 06:21:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252 00:08:44.665 06:21:56 
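run_qos_test converts the configured limit into the units iostat reports and passes when the measured rate lands inside what looks like a ±10% window, integer-truncated; the bounds in the bdev_qos_bw trace above (161 MB/s -> 164864 KiB/s with bounds 148377/181350) and in the read-only pass that follows (2 MB/s -> 2048 with bounds 1843/2252) are consistent with that. The check, reconstructed from those numbers:

    qos_limit_kib=$(( 161 * 1024 ))                   # 164864
    lower=$(( qos_limit_kib * 9 / 10 ))               # 148377
    upper=$(( qos_limit_kib * 11 / 10 ))              # 181350
    measured=177720                                   # from the bdev_qos_bw trace above
    [ "$measured" -ge "$lower" ] && [ "$measured" -le "$upper" ] && echo "within limit"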
blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2132 -lt 1843 ']' 00:08:44.665 06:21:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2132 -gt 2252 ']' 00:08:44.665 00:08:44.665 real 0m5.493s 00:08:44.665 user 0m0.131s 00:08:44.665 sys 0m0.032s 00:08:44.665 06:21:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.665 06:21:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:08:44.665 ************************************ 00:08:44.665 END TEST bdev_qos_ro_bw 00:08:44.665 ************************************ 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:44.665 00:08:44.665 Latency(us) 00:08:44.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.665 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:08:44.665 Malloc_0 : 27.85 209852.93 819.74 0.00 0.00 1208.95 340.71 503317.76 00:08:44.665 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:08:44.665 Null_1 : 27.89 276657.77 1080.69 0.00 0.00 924.96 67.03 31457.36 00:08:44.665 =================================================================================================================== 00:08:44.665 Total : 486510.70 1900.43 0.00 0.00 1047.36 67.03 503317.76 00:08:44.665 0 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 48186 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 48186 ']' 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 48186 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps -c -o command 48186 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # tail -1 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:44.665 killing process with pid 48186 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48186' 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 48186 00:08:44.665 Received shutdown signal, test time was about 27.901049 seconds 00:08:44.665 00:08:44.665 Latency(us) 00:08:44.665 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.665 =================================================================================================================== 00:08:44.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:44.665 06:21:56 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 48186 00:08:44.665 06:21:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT 00:08:44.665 00:08:44.665 real 0m29.390s 00:08:44.665 user 0m30.162s 00:08:44.665 sys 0m0.862s 00:08:44.665 06:21:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.665 ************************************ 00:08:44.665 END TEST bdev_qos 00:08:44.665 ************************************ 00:08:44.665 06:21:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:44.665 06:21:57 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:44.665 06:21:57 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:08:44.665 06:21:57 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:44.665 06:21:57 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.665 06:21:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:44.665 ************************************ 00:08:44.665 START TEST bdev_qd_sampling 00:08:44.665 ************************************ 00:08:44.665 06:21:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:08:44.665 06:21:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:08:44.924 06:21:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=48411 00:08:44.924 Process bdev QD sampling period testing pid: 48411 00:08:44.924 06:21:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 48411' 00:08:44.924 06:21:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:08:44.924 06:21:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:08:44.924 06:21:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 48411 00:08:44.924 06:21:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 48411 ']' 00:08:44.924 06:21:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.924 06:21:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:44.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.924 06:21:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.925 06:21:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:44.925 06:21:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:44.925 [2024-07-23 06:21:57.188364] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
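bdev_qd_sampling (pid 48411) reuses the same -z / perform_tests pattern as the QoS test above, but its point is the queue-depth sampling RPCs: the sampling period is set on Malloc_QD, and the sampled fields then show up in bdev_get_iostat output (queue_depth_polling_period, queue_depth, io_time, weighted_io_time in the JSON further down). The sampling-specific calls, again assuming scripts/rpc.py in place of the harness's rpc_cmd:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" bdev_set_qd_sampling_period Malloc_QD 10
    "$SPDK/scripts/rpc.py" bdev_get_iostat -b Malloc_QD   # sampled queue depth appears once I/O is running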
00:08:44.925 [2024-07-23 06:21:57.188605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:45.493 EAL: TSC is not safe to use in SMP mode 00:08:45.493 EAL: TSC is not invariant 00:08:45.493 [2024-07-23 06:21:57.740761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:45.493 [2024-07-23 06:21:57.838720] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:45.493 [2024-07-23 06:21:57.838783] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:08:45.493 [2024-07-23 06:21:57.841984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.493 [2024-07-23 06:21:57.841974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:45.751 Malloc_QD 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.751 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:08:45.752 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.752 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:45.752 [ 00:08:45.752 { 00:08:45.752 "name": "Malloc_QD", 00:08:45.752 "aliases": [ 00:08:45.752 "dd5c7a18-48bb-11ef-a06c-59ddad71024c" 00:08:45.752 ], 00:08:45.752 "product_name": "Malloc disk", 00:08:45.752 "block_size": 512, 00:08:45.752 "num_blocks": 262144, 00:08:45.752 "uuid": "dd5c7a18-48bb-11ef-a06c-59ddad71024c", 00:08:45.752 "assigned_rate_limits": { 00:08:45.752 "rw_ios_per_sec": 0, 00:08:45.752 "rw_mbytes_per_sec": 0, 00:08:45.752 "r_mbytes_per_sec": 0, 00:08:45.752 "w_mbytes_per_sec": 0 00:08:45.752 }, 00:08:45.752 "claimed": false, 
00:08:45.752 "zoned": false, 00:08:45.752 "supported_io_types": { 00:08:45.752 "read": true, 00:08:45.752 "write": true, 00:08:45.752 "unmap": true, 00:08:45.752 "flush": true, 00:08:45.752 "reset": true, 00:08:45.752 "nvme_admin": false, 00:08:45.752 "nvme_io": false, 00:08:45.752 "nvme_io_md": false, 00:08:45.752 "write_zeroes": true, 00:08:45.752 "zcopy": true, 00:08:45.752 "get_zone_info": false, 00:08:45.752 "zone_management": false, 00:08:45.752 "zone_append": false, 00:08:45.752 "compare": false, 00:08:45.752 "compare_and_write": false, 00:08:45.752 "abort": true, 00:08:45.752 "seek_hole": false, 00:08:45.752 "seek_data": false, 00:08:45.752 "copy": true, 00:08:45.752 "nvme_iov_md": false 00:08:45.752 }, 00:08:45.752 "memory_domains": [ 00:08:45.752 { 00:08:45.752 "dma_device_id": "system", 00:08:45.752 "dma_device_type": 1 00:08:45.752 }, 00:08:45.752 { 00:08:45.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.752 "dma_device_type": 2 00:08:45.752 } 00:08:45.752 ], 00:08:45.752 "driver_specific": {} 00:08:45.752 } 00:08:45.752 ] 00:08:45.752 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.752 06:21:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:08:45.752 06:21:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:08:45.752 06:21:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:46.010 Running I/O for 5 seconds... 00:08:47.949 06:22:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:08:47.949 06:22:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:08:47.949 06:22:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:08:47.949 06:22:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:08:47.949 06:22:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:08:47.949 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.949 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:47.949 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:08:47.950 "tick_rate": 2199994391, 00:08:47.950 "ticks": 738870777860, 00:08:47.950 "bdevs": [ 00:08:47.950 { 00:08:47.950 "name": "Malloc_QD", 00:08:47.950 "bytes_read": 12716118528, 00:08:47.950 "num_read_ops": 3104515, 00:08:47.950 "bytes_written": 0, 00:08:47.950 "num_write_ops": 0, 00:08:47.950 "bytes_unmapped": 0, 00:08:47.950 "num_unmap_ops": 0, 00:08:47.950 "bytes_copied": 0, 00:08:47.950 "num_copy_ops": 0, 00:08:47.950 "read_latency_ticks": 2266668975939, 00:08:47.950 "max_read_latency_ticks": 1367712, 00:08:47.950 "min_read_latency_ticks": 
39051, 00:08:47.950 "write_latency_ticks": 0, 00:08:47.950 "max_write_latency_ticks": 0, 00:08:47.950 "min_write_latency_ticks": 0, 00:08:47.950 "unmap_latency_ticks": 0, 00:08:47.950 "max_unmap_latency_ticks": 0, 00:08:47.950 "min_unmap_latency_ticks": 0, 00:08:47.950 "copy_latency_ticks": 0, 00:08:47.950 "max_copy_latency_ticks": 0, 00:08:47.950 "min_copy_latency_ticks": 0, 00:08:47.950 "io_error": {}, 00:08:47.950 "queue_depth_polling_period": 10, 00:08:47.950 "queue_depth": 512, 00:08:47.950 "io_time": 370, 00:08:47.950 "weighted_io_time": 189440 00:08:47.950 } 00:08:47.950 ] 00:08:47.950 }' 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:47.950 00:08:47.950 Latency(us) 00:08:47.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.950 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:08:47.950 Malloc_QD : 2.04 766123.24 2992.67 0.00 0.00 333.88 57.48 621.85 00:08:47.950 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:08:47.950 Malloc_QD : 2.04 774818.93 3026.64 0.00 0.00 330.13 55.39 554.82 00:08:47.950 =================================================================================================================== 00:08:47.950 Total : 1540942.18 6019.31 0.00 0.00 331.99 55.39 621.85 00:08:47.950 0 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 48411 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 48411 ']' 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 48411 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps -c -o command 48411 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # tail -1 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:47.950 killing process with pid 48411 00:08:47.950 Received shutdown signal, test time was about 2.071443 seconds 00:08:47.950 00:08:47.950 Latency(us) 00:08:47.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.950 =================================================================================================================== 00:08:47.950 Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48411' 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 48411 00:08:47.950 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 48411 00:08:48.211 06:22:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:08:48.211 00:08:48.211 real 0m3.407s 00:08:48.211 user 0m5.957s 00:08:48.211 sys 0m0.718s 00:08:48.211 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.211 06:22:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:48.211 ************************************ 00:08:48.211 END TEST bdev_qd_sampling 00:08:48.211 ************************************ 00:08:48.211 06:22:00 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:48.211 06:22:00 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:08:48.211 06:22:00 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:48.211 06:22:00 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.211 06:22:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:48.211 ************************************ 00:08:48.211 START TEST bdev_error 00:08:48.211 ************************************ 00:08:48.211 06:22:00 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:08:48.211 06:22:00 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:08:48.211 06:22:00 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:08:48.211 06:22:00 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:08:48.211 06:22:00 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=48454 00:08:48.211 Process error testing pid: 48454 00:08:48.211 06:22:00 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:08:48.211 06:22:00 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 48454' 00:08:48.211 06:22:00 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 48454 00:08:48.211 06:22:00 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 48454 ']' 00:08:48.211 06:22:00 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.211 06:22:00 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.211 06:22:00 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.211 06:22:00 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.211 06:22:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:48.211 [2024-07-23 06:22:00.646404] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:48.211 [2024-07-23 06:22:00.646701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:48.779 EAL: TSC is not safe to use in SMP mode 00:08:48.779 EAL: TSC is not invariant 00:08:48.779 [2024-07-23 06:22:01.199245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.037 [2024-07-23 06:22:01.298075] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:08:49.037 [2024-07-23 06:22:01.300594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:08:49.296 06:22:01 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:49.296 Dev_1 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.296 06:22:01 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.296 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:49.555 [ 00:08:49.555 { 00:08:49.555 "name": "Dev_1", 00:08:49.555 "aliases": [ 00:08:49.555 "df85e4d5-48bb-11ef-a06c-59ddad71024c" 00:08:49.555 ], 00:08:49.555 "product_name": "Malloc disk", 00:08:49.555 "block_size": 512, 00:08:49.555 "num_blocks": 262144, 00:08:49.555 "uuid": "df85e4d5-48bb-11ef-a06c-59ddad71024c", 00:08:49.555 "assigned_rate_limits": { 00:08:49.555 "rw_ios_per_sec": 0, 00:08:49.555 "rw_mbytes_per_sec": 0, 00:08:49.555 "r_mbytes_per_sec": 0, 00:08:49.555 "w_mbytes_per_sec": 0 00:08:49.555 }, 00:08:49.555 "claimed": false, 00:08:49.555 "zoned": false, 00:08:49.555 "supported_io_types": { 00:08:49.555 "read": true, 00:08:49.555 "write": true, 00:08:49.555 "unmap": true, 00:08:49.555 "flush": true, 00:08:49.555 "reset": true, 00:08:49.555 "nvme_admin": false, 00:08:49.555 "nvme_io": false, 00:08:49.555 "nvme_io_md": false, 00:08:49.555 "write_zeroes": true, 00:08:49.555 "zcopy": true, 
00:08:49.555 "get_zone_info": false, 00:08:49.555 "zone_management": false, 00:08:49.555 "zone_append": false, 00:08:49.555 "compare": false, 00:08:49.555 "compare_and_write": false, 00:08:49.555 "abort": true, 00:08:49.555 "seek_hole": false, 00:08:49.555 "seek_data": false, 00:08:49.555 "copy": true, 00:08:49.555 "nvme_iov_md": false 00:08:49.555 }, 00:08:49.555 "memory_domains": [ 00:08:49.555 { 00:08:49.555 "dma_device_id": "system", 00:08:49.555 "dma_device_type": 1 00:08:49.555 }, 00:08:49.555 { 00:08:49.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.555 "dma_device_type": 2 00:08:49.555 } 00:08:49.555 ], 00:08:49.555 "driver_specific": {} 00:08:49.555 } 00:08:49.555 ] 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:08:49.555 06:22:01 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:49.555 true 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.555 06:22:01 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:49.555 Dev_2 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.555 06:22:01 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:49.555 [ 00:08:49.555 { 00:08:49.555 "name": "Dev_2", 00:08:49.555 "aliases": [ 00:08:49.555 "df8bfeba-48bb-11ef-a06c-59ddad71024c" 00:08:49.555 ], 00:08:49.555 "product_name": "Malloc disk", 00:08:49.555 "block_size": 512, 00:08:49.555 "num_blocks": 262144, 00:08:49.555 "uuid": "df8bfeba-48bb-11ef-a06c-59ddad71024c", 00:08:49.555 "assigned_rate_limits": { 00:08:49.555 "rw_ios_per_sec": 0, 00:08:49.555 "rw_mbytes_per_sec": 0, 
00:08:49.555 "r_mbytes_per_sec": 0, 00:08:49.555 "w_mbytes_per_sec": 0 00:08:49.555 }, 00:08:49.555 "claimed": false, 00:08:49.555 "zoned": false, 00:08:49.555 "supported_io_types": { 00:08:49.555 "read": true, 00:08:49.555 "write": true, 00:08:49.555 "unmap": true, 00:08:49.555 "flush": true, 00:08:49.555 "reset": true, 00:08:49.555 "nvme_admin": false, 00:08:49.555 "nvme_io": false, 00:08:49.555 "nvme_io_md": false, 00:08:49.555 "write_zeroes": true, 00:08:49.555 "zcopy": true, 00:08:49.555 "get_zone_info": false, 00:08:49.555 "zone_management": false, 00:08:49.555 "zone_append": false, 00:08:49.555 "compare": false, 00:08:49.555 "compare_and_write": false, 00:08:49.555 "abort": true, 00:08:49.555 "seek_hole": false, 00:08:49.555 "seek_data": false, 00:08:49.555 "copy": true, 00:08:49.555 "nvme_iov_md": false 00:08:49.555 }, 00:08:49.555 "memory_domains": [ 00:08:49.555 { 00:08:49.555 "dma_device_id": "system", 00:08:49.555 "dma_device_type": 1 00:08:49.555 }, 00:08:49.555 { 00:08:49.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.555 "dma_device_type": 2 00:08:49.555 } 00:08:49.555 ], 00:08:49.555 "driver_specific": {} 00:08:49.555 } 00:08:49.555 ] 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:08:49.555 06:22:01 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:49.555 06:22:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.555 06:22:01 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:08:49.555 06:22:01 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:08:49.555 Running I/O for 5 seconds... 00:08:50.490 06:22:02 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 48454 00:08:50.490 Process is existed as continue on error is set. Pid: 48454 00:08:50.490 06:22:02 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. 
Pid: 48454' 00:08:50.490 06:22:02 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:08:50.490 06:22:02 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.490 06:22:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:50.490 06:22:02 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.490 06:22:02 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:08:50.490 06:22:02 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.490 06:22:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:50.490 06:22:02 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.490 06:22:02 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:08:50.750 Timeout while waiting for response: 00:08:50.750 00:08:50.750 00:08:54.934 00:08:54.934 Latency(us) 00:08:54.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.934 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:08:54.934 EE_Dev_1 : 0.98 306852.55 1198.64 5.12 0.00 51.90 30.95 182.46 00:08:54.934 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:08:54.934 Dev_2 : 5.00 699585.29 2732.76 0.00 0.00 22.65 5.67 22639.77 00:08:54.934 =================================================================================================================== 00:08:54.934 Total : 1006437.84 3931.40 5.12 0.00 24.95 5.67 22639.77 00:08:55.869 06:22:08 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 48454 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 48454 ']' 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 48454 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps -c -o command 48454 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # tail -1 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:55.869 killing process with pid 48454 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48454' 00:08:55.869 Received shutdown signal, test time was about 5.000000 seconds 00:08:55.869 00:08:55.869 Latency(us) 00:08:55.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.869 =================================================================================================================== 00:08:55.869 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 48454 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 48454 00:08:55.869 06:22:08 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=48494 00:08:55.869 06:22:08 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:08:55.869 Process error testing pid: 48494 00:08:55.869 06:22:08 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 48494' 00:08:55.869 06:22:08 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 48494 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 48494 ']' 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:55.869 06:22:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:55.869 [2024-07-23 06:22:08.256283] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:55.869 [2024-07-23 06:22:08.256525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:56.436 EAL: TSC is not safe to use in SMP mode 00:08:56.436 EAL: TSC is not invariant 00:08:56.436 [2024-07-23 06:22:08.825593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.436 [2024-07-23 06:22:08.916584] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:08:56.436 [2024-07-23 06:22:08.918716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:08:57.002 06:22:09 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:57.002 Dev_1 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.002 06:22:09 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.002 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:57.003 [ 00:08:57.003 { 00:08:57.003 "name": "Dev_1", 00:08:57.003 "aliases": [ 00:08:57.003 "e408cc85-48bb-11ef-a06c-59ddad71024c" 00:08:57.003 ], 00:08:57.003 "product_name": "Malloc disk", 00:08:57.003 "block_size": 512, 00:08:57.003 "num_blocks": 262144, 00:08:57.003 "uuid": "e408cc85-48bb-11ef-a06c-59ddad71024c", 00:08:57.003 "assigned_rate_limits": { 00:08:57.003 "rw_ios_per_sec": 0, 00:08:57.003 "rw_mbytes_per_sec": 0, 00:08:57.003 "r_mbytes_per_sec": 0, 00:08:57.003 "w_mbytes_per_sec": 0 00:08:57.003 }, 00:08:57.003 "claimed": false, 00:08:57.003 "zoned": false, 00:08:57.003 "supported_io_types": { 00:08:57.003 "read": true, 00:08:57.003 "write": true, 00:08:57.003 "unmap": true, 00:08:57.003 "flush": true, 00:08:57.003 "reset": true, 00:08:57.003 "nvme_admin": false, 00:08:57.003 "nvme_io": false, 00:08:57.003 "nvme_io_md": false, 00:08:57.003 "write_zeroes": true, 00:08:57.003 "zcopy": true, 00:08:57.003 "get_zone_info": false, 00:08:57.003 "zone_management": false, 00:08:57.003 "zone_append": false, 00:08:57.003 "compare": false, 00:08:57.003 "compare_and_write": false, 00:08:57.003 "abort": true, 00:08:57.003 "seek_hole": false, 00:08:57.003 "seek_data": false, 00:08:57.003 "copy": true, 00:08:57.003 "nvme_iov_md": false 00:08:57.003 }, 00:08:57.003 "memory_domains": [ 00:08:57.003 { 00:08:57.003 "dma_device_id": "system", 00:08:57.003 "dma_device_type": 1 00:08:57.003 }, 00:08:57.003 { 00:08:57.003 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.003 "dma_device_type": 2 00:08:57.003 } 00:08:57.003 ], 00:08:57.003 "driver_specific": {} 00:08:57.003 } 00:08:57.003 ] 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:08:57.003 06:22:09 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:57.003 true 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.003 06:22:09 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:57.003 Dev_2 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.003 06:22:09 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:57.003 [ 00:08:57.003 { 00:08:57.003 "name": "Dev_2", 00:08:57.003 "aliases": [ 00:08:57.003 "e40ee771-48bb-11ef-a06c-59ddad71024c" 00:08:57.003 ], 00:08:57.003 "product_name": "Malloc disk", 00:08:57.003 "block_size": 512, 00:08:57.003 "num_blocks": 262144, 00:08:57.003 "uuid": "e40ee771-48bb-11ef-a06c-59ddad71024c", 00:08:57.003 "assigned_rate_limits": { 00:08:57.003 "rw_ios_per_sec": 0, 00:08:57.003 "rw_mbytes_per_sec": 0, 00:08:57.003 "r_mbytes_per_sec": 0, 00:08:57.003 "w_mbytes_per_sec": 0 00:08:57.003 }, 00:08:57.003 "claimed": false, 00:08:57.003 "zoned": false, 00:08:57.003 "supported_io_types": { 00:08:57.003 "read": true, 00:08:57.003 "write": true, 00:08:57.003 "unmap": true, 00:08:57.003 "flush": true, 00:08:57.003 "reset": true, 00:08:57.003 "nvme_admin": false, 00:08:57.003 "nvme_io": false, 00:08:57.003 "nvme_io_md": false, 00:08:57.003 "write_zeroes": true, 00:08:57.003 "zcopy": true, 00:08:57.003 "get_zone_info": false, 
00:08:57.003 "zone_management": false, 00:08:57.003 "zone_append": false, 00:08:57.003 "compare": false, 00:08:57.003 "compare_and_write": false, 00:08:57.003 "abort": true, 00:08:57.003 "seek_hole": false, 00:08:57.003 "seek_data": false, 00:08:57.003 "copy": true, 00:08:57.003 "nvme_iov_md": false 00:08:57.003 }, 00:08:57.003 "memory_domains": [ 00:08:57.003 { 00:08:57.003 "dma_device_id": "system", 00:08:57.003 "dma_device_type": 1 00:08:57.003 }, 00:08:57.003 { 00:08:57.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.003 "dma_device_type": 2 00:08:57.003 } 00:08:57.003 ], 00:08:57.003 "driver_specific": {} 00:08:57.003 } 00:08:57.003 ] 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:08:57.003 06:22:09 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.003 06:22:09 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 48494 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 48494 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:08:57.003 06:22:09 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.003 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 48494 00:08:57.262 Running I/O for 5 seconds... 
00:08:57.262 task offset: 131296 on job bdev=EE_Dev_1 fails 00:08:57.262 00:08:57.262 Latency(us) 00:08:57.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.262 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:08:57.262 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:08:57.262 EE_Dev_1 : 0.00 140127.39 547.37 31847.13 0.00 70.81 23.62 147.08 00:08:57.262 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:08:57.262 Dev_2 : 0.00 174863.39 683.06 0.00 0.00 42.80 34.44 51.43 00:08:57.262 =================================================================================================================== 00:08:57.262 Total : 314990.78 1230.43 31847.13 0.00 55.62 23.62 147.08 00:08:57.262 [2024-07-23 06:22:09.546607] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:57.262 request: 00:08:57.262 { 00:08:57.262 "method": "perform_tests", 00:08:57.262 "req_id": 1 00:08:57.262 } 00:08:57.262 Got JSON-RPC error response 00:08:57.262 response: 00:08:57.262 { 00:08:57.262 "code": -32603, 00:08:57.262 "message": "bdevperf failed with error Operation not permitted" 00:08:57.262 } 00:08:57.262 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:08:57.262 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:57.262 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:08:57.262 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:08:57.262 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:08:57.262 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:57.262 00:08:57.262 real 0m9.138s 00:08:57.262 user 0m9.231s 00:08:57.262 sys 0m1.367s 00:08:57.262 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.262 06:22:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:57.262 ************************************ 00:08:57.262 END TEST bdev_error 00:08:57.262 ************************************ 00:08:57.521 06:22:09 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:57.521 06:22:09 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:08:57.521 06:22:09 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:57.521 06:22:09 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.521 06:22:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:57.521 ************************************ 00:08:57.521 START TEST bdev_stat 00:08:57.521 ************************************ 00:08:57.521 06:22:09 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:08:57.521 06:22:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:08:57.521 06:22:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=48525 00:08:57.521 06:22:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 48525' 00:08:57.521 Process Bdev IO statistics testing pid: 48525 00:08:57.521 06:22:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:08:57.521 06:22:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 48525 00:08:57.521 
06:22:09 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 48525 ']' 00:08:57.521 06:22:09 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.521 06:22:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:08:57.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.521 06:22:09 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.521 06:22:09 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.521 06:22:09 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.521 06:22:09 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:08:57.521 [2024-07-23 06:22:09.831807] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:57.521 [2024-07-23 06:22:09.832088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:58.087 EAL: TSC is not safe to use in SMP mode 00:08:58.087 EAL: TSC is not invariant 00:08:58.087 [2024-07-23 06:22:10.374593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:58.087 [2024-07-23 06:22:10.460323] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:58.087 [2024-07-23 06:22:10.460381] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:08:58.087 [2024-07-23 06:22:10.463016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.087 [2024-07-23 06:22:10.463008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:08:58.667 Malloc_STAT 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:08:58.667 
06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:08:58.667 [ 00:08:58.667 { 00:08:58.667 "name": "Malloc_STAT", 00:08:58.667 "aliases": [ 00:08:58.667 "e4f7f036-48bb-11ef-a06c-59ddad71024c" 00:08:58.667 ], 00:08:58.667 "product_name": "Malloc disk", 00:08:58.667 "block_size": 512, 00:08:58.667 "num_blocks": 262144, 00:08:58.667 "uuid": "e4f7f036-48bb-11ef-a06c-59ddad71024c", 00:08:58.667 "assigned_rate_limits": { 00:08:58.667 "rw_ios_per_sec": 0, 00:08:58.667 "rw_mbytes_per_sec": 0, 00:08:58.667 "r_mbytes_per_sec": 0, 00:08:58.667 "w_mbytes_per_sec": 0 00:08:58.667 }, 00:08:58.667 "claimed": false, 00:08:58.667 "zoned": false, 00:08:58.667 "supported_io_types": { 00:08:58.667 "read": true, 00:08:58.667 "write": true, 00:08:58.667 "unmap": true, 00:08:58.667 "flush": true, 00:08:58.667 "reset": true, 00:08:58.667 "nvme_admin": false, 00:08:58.667 "nvme_io": false, 00:08:58.667 "nvme_io_md": false, 00:08:58.667 "write_zeroes": true, 00:08:58.667 "zcopy": true, 00:08:58.667 "get_zone_info": false, 00:08:58.667 "zone_management": false, 00:08:58.667 "zone_append": false, 00:08:58.667 "compare": false, 00:08:58.667 "compare_and_write": false, 00:08:58.667 "abort": true, 00:08:58.667 "seek_hole": false, 00:08:58.667 "seek_data": false, 00:08:58.667 "copy": true, 00:08:58.667 "nvme_iov_md": false 00:08:58.667 }, 00:08:58.667 "memory_domains": [ 00:08:58.667 { 00:08:58.667 "dma_device_id": "system", 00:08:58.667 "dma_device_type": 1 00:08:58.667 }, 00:08:58.667 { 00:08:58.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.667 "dma_device_type": 2 00:08:58.667 } 00:08:58.667 ], 00:08:58.667 "driver_specific": {} 00:08:58.667 } 00:08:58.667 ] 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:08:58.667 06:22:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:58.667 Running I/O for 10 seconds... 
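While that 10-second job runs, stat_function_test samples the same bdev twice with bdev_get_iostat and once per channel with bdev_get_iostat -c, then checks that the summed per-channel read count lies between the two whole-bdev samples (the counters only grow). A sketch of that check, assuming the same rpc.py and socket as the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
io_count1=$($RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')                  # whole-bdev sample 1
per_channel=$($RPC bdev_get_iostat -b Malloc_STAT -c | jq -r '[.channels[].num_read_ops] | add')   # sum over both reactor channels
io_count2=$($RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')                  # whole-bdev sample 2
[ "$per_channel" -ge "$io_count1" ] && [ "$per_channel" -le "$io_count2" ]                         # must hold if the counters are consistent

In the trace that follows, the three values come out to 2926339, 2965504 (1489664 + 1475840) and 3018243, so the check passes.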
00:09:00.572 06:22:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:09:00.572 06:22:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:09:00.572 06:22:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:09:00.572 06:22:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:09:00.572 06:22:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:09:00.572 06:22:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:09:00.572 06:22:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:09:00.572 06:22:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:09:00.572 06:22:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:09:00.572 06:22:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:00.572 06:22:12 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.572 06:22:12 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:00.572 06:22:12 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.572 06:22:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:09:00.572 "tick_rate": 2199994391, 00:09:00.572 "ticks": 766669955604, 00:09:00.572 "bdevs": [ 00:09:00.572 { 00:09:00.572 "name": "Malloc_STAT", 00:09:00.572 "bytes_read": 11986309632, 00:09:00.572 "num_read_ops": 2926339, 00:09:00.572 "bytes_written": 0, 00:09:00.572 "num_write_ops": 0, 00:09:00.572 "bytes_unmapped": 0, 00:09:00.572 "num_unmap_ops": 0, 00:09:00.572 "bytes_copied": 0, 00:09:00.572 "num_copy_ops": 0, 00:09:00.572 "read_latency_ticks": 2133023448402, 00:09:00.572 "max_read_latency_ticks": 1422027, 00:09:00.572 "min_read_latency_ticks": 37271, 00:09:00.572 "write_latency_ticks": 0, 00:09:00.572 "max_write_latency_ticks": 0, 00:09:00.572 "min_write_latency_ticks": 0, 00:09:00.572 "unmap_latency_ticks": 0, 00:09:00.572 "max_unmap_latency_ticks": 0, 00:09:00.572 "min_unmap_latency_ticks": 0, 00:09:00.572 "copy_latency_ticks": 0, 00:09:00.572 "max_copy_latency_ticks": 0, 00:09:00.572 "min_copy_latency_ticks": 0, 00:09:00.572 "io_error": {} 00:09:00.572 } 00:09:00.573 ] 00:09:00.573 }' 00:09:00.573 06:22:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:09:00.573 06:22:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=2926339 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:09:00.573 "tick_rate": 2199994391, 00:09:00.573 "ticks": 766725478033, 00:09:00.573 "name": "Malloc_STAT", 00:09:00.573 "channels": [ 00:09:00.573 { 00:09:00.573 "thread_id": 2, 00:09:00.573 "bytes_read": 6101663744, 00:09:00.573 "num_read_ops": 1489664, 00:09:00.573 "bytes_written": 0, 00:09:00.573 "num_write_ops": 0, 00:09:00.573 "bytes_unmapped": 0, 00:09:00.573 "num_unmap_ops": 0, 
00:09:00.573 "bytes_copied": 0, 00:09:00.573 "num_copy_ops": 0, 00:09:00.573 "read_latency_ticks": 1080666161437, 00:09:00.573 "max_read_latency_ticks": 1422027, 00:09:00.573 "min_read_latency_ticks": 617110, 00:09:00.573 "write_latency_ticks": 0, 00:09:00.573 "max_write_latency_ticks": 0, 00:09:00.573 "min_write_latency_ticks": 0, 00:09:00.573 "unmap_latency_ticks": 0, 00:09:00.573 "max_unmap_latency_ticks": 0, 00:09:00.573 "min_unmap_latency_ticks": 0, 00:09:00.573 "copy_latency_ticks": 0, 00:09:00.573 "max_copy_latency_ticks": 0, 00:09:00.573 "min_copy_latency_ticks": 0 00:09:00.573 }, 00:09:00.573 { 00:09:00.573 "thread_id": 3, 00:09:00.573 "bytes_read": 6045040640, 00:09:00.573 "num_read_ops": 1475840, 00:09:00.573 "bytes_written": 0, 00:09:00.573 "num_write_ops": 0, 00:09:00.573 "bytes_unmapped": 0, 00:09:00.573 "num_unmap_ops": 0, 00:09:00.573 "bytes_copied": 0, 00:09:00.573 "num_copy_ops": 0, 00:09:00.573 "read_latency_ticks": 1080791773484, 00:09:00.573 "max_read_latency_ticks": 1273683, 00:09:00.573 "min_read_latency_ticks": 633454, 00:09:00.573 "write_latency_ticks": 0, 00:09:00.573 "max_write_latency_ticks": 0, 00:09:00.573 "min_write_latency_ticks": 0, 00:09:00.573 "unmap_latency_ticks": 0, 00:09:00.573 "max_unmap_latency_ticks": 0, 00:09:00.573 "min_unmap_latency_ticks": 0, 00:09:00.573 "copy_latency_ticks": 0, 00:09:00.573 "max_copy_latency_ticks": 0, 00:09:00.573 "min_copy_latency_ticks": 0 00:09:00.573 } 00:09:00.573 ] 00:09:00.573 }' 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=1489664 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=1489664 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=1475840 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=2965504 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:09:00.573 "tick_rate": 2199994391, 00:09:00.573 "ticks": 766798846152, 00:09:00.573 "bdevs": [ 00:09:00.573 { 00:09:00.573 "name": "Malloc_STAT", 00:09:00.573 "bytes_read": 12362748416, 00:09:00.573 "num_read_ops": 3018243, 00:09:00.573 "bytes_written": 0, 00:09:00.573 "num_write_ops": 0, 00:09:00.573 "bytes_unmapped": 0, 00:09:00.573 "num_unmap_ops": 0, 00:09:00.573 "bytes_copied": 0, 00:09:00.573 "num_copy_ops": 0, 00:09:00.573 "read_latency_ticks": 2198941422146, 00:09:00.573 "max_read_latency_ticks": 1422027, 00:09:00.573 "min_read_latency_ticks": 37271, 00:09:00.573 "write_latency_ticks": 0, 00:09:00.573 "max_write_latency_ticks": 0, 00:09:00.573 "min_write_latency_ticks": 0, 00:09:00.573 "unmap_latency_ticks": 0, 00:09:00.573 "max_unmap_latency_ticks": 0, 00:09:00.573 "min_unmap_latency_ticks": 0, 00:09:00.573 "copy_latency_ticks": 0, 00:09:00.573 "max_copy_latency_ticks": 0, 00:09:00.573 
"min_copy_latency_ticks": 0, 00:09:00.573 "io_error": {} 00:09:00.573 } 00:09:00.573 ] 00:09:00.573 }' 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=3018243 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 2965504 -lt 2926339 ']' 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 2965504 -gt 3018243 ']' 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:00.573 00:09:00.573 Latency(us) 00:09:00.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.573 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:09:00.573 Malloc_STAT : 1.98 776366.02 3032.68 0.00 0.00 329.48 53.99 647.91 00:09:00.573 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:00.573 Malloc_STAT : 1.98 768921.83 3003.60 0.00 0.00 332.67 53.29 580.89 00:09:00.573 =================================================================================================================== 00:09:00.573 Total : 1545287.85 6036.28 0.00 0.00 331.06 53.29 647.91 00:09:00.573 0 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 48525 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 48525 ']' 00:09:00.573 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 48525 00:09:00.868 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:09:00.868 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:00.868 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps -c -o command 48525 00:09:00.868 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # tail -1 00:09:00.868 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:09:00.868 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:09:00.868 killing process with pid 48525 00:09:00.868 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48525' 00:09:00.868 Received shutdown signal, test time was about 2.012285 seconds 00:09:00.868 00:09:00.868 Latency(us) 00:09:00.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.868 =================================================================================================================== 00:09:00.868 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:00.868 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 48525 00:09:00.868 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 48525 00:09:00.868 06:22:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:09:00.868 00:09:00.868 real 0m3.455s 00:09:00.868 user 0m6.255s 00:09:00.868 sys 0m0.677s 00:09:00.868 06:22:13 blockdev_general.bdev_stat -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.868 06:22:13 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:00.868 ************************************ 00:09:00.868 END TEST bdev_stat 00:09:00.868 ************************************ 00:09:00.868 06:22:13 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:00.868 06:22:13 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:09:00.868 06:22:13 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:09:00.868 06:22:13 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:00.868 06:22:13 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:09:00.868 06:22:13 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:00.868 06:22:13 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:00.868 06:22:13 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:09:00.868 06:22:13 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:09:00.868 06:22:13 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:09:00.868 06:22:13 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:09:00.868 00:09:00.868 real 1m33.611s 00:09:00.868 user 4m29.267s 00:09:00.868 sys 0m26.864s 00:09:00.868 06:22:13 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.868 06:22:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:00.868 ************************************ 00:09:00.868 END TEST blockdev_general 00:09:00.868 ************************************ 00:09:00.868 06:22:13 -- common/autotest_common.sh@1142 -- # return 0 00:09:00.868 06:22:13 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:00.868 06:22:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:00.868 06:22:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.868 06:22:13 -- common/autotest_common.sh@10 -- # set +x 00:09:00.868 ************************************ 00:09:00.868 START TEST bdev_raid 00:09:00.868 ************************************ 00:09:00.868 06:22:13 bdev_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:01.127 * Looking for test storage... 
00:09:01.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:01.127 06:22:13 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:01.127 06:22:13 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:01.127 06:22:13 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:09:01.127 06:22:13 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:09:01.127 06:22:13 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:09:01.127 06:22:13 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:09:01.127 06:22:13 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:09:01.127 06:22:13 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' FreeBSD = Linux ']' 00:09:01.127 06:22:13 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:09:01.127 06:22:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:01.127 06:22:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.127 06:22:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.127 ************************************ 00:09:01.127 START TEST raid0_resize_test 00:09:01.127 ************************************ 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=48626 00:09:01.127 Process raid pid: 48626 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 48626' 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 48626 /var/tmp/spdk-raid.sock 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 48626 ']' 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.127 06:22:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.127 [2024-07-23 06:22:13.516438] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:01.127 [2024-07-23 06:22:13.516611] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:01.694 EAL: TSC is not safe to use in SMP mode 00:09:01.694 EAL: TSC is not invariant 00:09:01.694 [2024-07-23 06:22:14.072956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.694 [2024-07-23 06:22:14.154635] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:01.694 [2024-07-23 06:22:14.156928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.695 [2024-07-23 06:22:14.157739] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.695 [2024-07-23 06:22:14.157752] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.262 06:22:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.262 06:22:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:09:02.262 06:22:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:09:02.520 Base_1 00:09:02.520 06:22:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:09:02.779 Base_2 00:09:02.779 06:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:09:03.052 [2024-07-23 06:22:15.365799] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:03.052 [2024-07-23 06:22:15.366367] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:03.052 [2024-07-23 06:22:15.366392] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x18e0ee34a00 00:09:03.052 [2024-07-23 06:22:15.366396] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:03.052 [2024-07-23 06:22:15.366431] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18e0ee97e20 00:09:03.052 [2024-07-23 06:22:15.366494] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18e0ee34a00 00:09:03.052 [2024-07-23 06:22:15.366499] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x18e0ee34a00 00:09:03.052 [2024-07-23 06:22:15.366534] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.052 06:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:09:03.322 [2024-07-23 06:22:15.597809] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:03.322 [2024-07-23 06:22:15.597843] bdev_raid.c:2302:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:03.322 true 00:09:03.322 06:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:09:03.322 06:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:03.580 [2024-07-23 06:22:15.877844] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.580 
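The raid0_resize_test body reduces to the following RPC sequence, reconstructed from the commands above (a sketch only; the waitforlisten and killprocess helpers are omitted):

# Two 32 MiB null bdevs with 512 B blocks -> 65536 blocks each
$rpc_py bdev_null_create Base_1 32 512
$rpc_py bdev_null_create Base_2 32 512

# raid0 over both with a 64 KiB strip -> 131072 blocks (64 MiB) total
$rpc_py bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid

# Grow Base_1 to 64 MiB, then read back the raid's block count
$rpc_py bdev_null_resize Base_1 64
blkcnt=$($rpc_py bdev_get_bdevs -b Raid | jq '.[].num_blocks')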
06:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:09:03.580 06:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:09:03.580 06:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:09:03.580 06:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:09:03.839 [2024-07-23 06:22:16.145817] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:03.839 [2024-07-23 06:22:16.145846] bdev_raid.c:2302:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:03.839 [2024-07-23 06:22:16.145878] bdev_raid.c:2316:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:03.839 true 00:09:03.839 06:22:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:03.839 06:22:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:09:04.144 [2024-07-23 06:22:16.417861] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 48626 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 48626 ']' 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 48626 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # tail -1 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps -c -o command 48626 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:04.144 killing process with pid 48626 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48626' 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 48626 00:09:04.144 [2024-07-23 06:22:16.445168] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.144 [2024-07-23 06:22:16.445193] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.144 [2024-07-23 06:22:16.445205] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.144 [2024-07-23 06:22:16.445209] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18e0ee34a00 name Raid, state offline 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 48626 00:09:04.144 [2024-07-23 06:22:16.445337] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.144 
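The size checks above are simple arithmetic: raid size in MiB is num_blocks times the 512 B block size divided by 1 MiB. Note that resizing only Base_1 leaves the raid at 131072 blocks, since a raid0 volume can use no more of each member than the smallest one provides; the volume grows to 262144 blocks only once Base_2 is resized as well. A sketch of the computation (the exact expression in bdev_raid.sh may differ):

# raid_size_mb = blkcnt * blksize / 1 MiB
#   131072 * 512 / 1048576 = 64   (only Base_1 resized: raid size unchanged)
#   262144 * 512 / 1048576 = 128  (both bases at 64 MiB)
raid_size_mb=$(( blkcnt * 512 / 1048576 ))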
06:22:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:09:04.144 00:09:04.144 real 0m3.118s 00:09:04.144 user 0m4.617s 00:09:04.144 sys 0m0.842s 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.144 06:22:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.144 ************************************ 00:09:04.144 END TEST raid0_resize_test 00:09:04.144 ************************************ 00:09:04.404 06:22:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:04.404 06:22:16 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:09:04.404 06:22:16 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:09:04.404 06:22:16 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:04.404 06:22:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:04.404 06:22:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.404 06:22:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.404 ************************************ 00:09:04.404 START TEST raid_state_function_test 00:09:04.404 ************************************ 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=48676 00:09:04.404 Process raid pid: 48676 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48676' 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 48676 /var/tmp/spdk-raid.sock 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 48676 ']' 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:04.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.404 06:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.404 [2024-07-23 06:22:16.680943] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:04.404 [2024-07-23 06:22:16.681168] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:04.972 EAL: TSC is not safe to use in SMP mode 00:09:04.972 EAL: TSC is not invariant 00:09:04.972 [2024-07-23 06:22:17.202191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.972 [2024-07-23 06:22:17.293601] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
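raid_state_function_test exercises the configuring -> online -> offline state machine of a raid bdev. The verification step repeated throughout the log is just an RPC dump filtered with jq; a condensed sketch of the walk, with commands taken from the log and the intermediate delete/re-create steps left out:

# 1. Create the raid before its members exist -> state stays "configuring"
$rpc_py bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
$rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

# 2. Add both members -> state becomes "online"
$rpc_py bdev_malloc_create 32 512 -b BaseBdev1
$rpc_py bdev_malloc_create 32 512 -b BaseBdev2

# 3. Remove one member -> raid0 has no redundancy, so state drops to "offline"
$rpc_py bdev_malloc_delete BaseBdev1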
00:09:04.972 [2024-07-23 06:22:17.295872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.972 [2024-07-23 06:22:17.296683] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.972 [2024-07-23 06:22:17.296697] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.231 06:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.231 06:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:09:05.231 06:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:05.490 [2024-07-23 06:22:18.001686] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.490 [2024-07-23 06:22:18.001753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.490 [2024-07-23 06:22:18.001759] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.490 [2024-07-23 06:22:18.001783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.749 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:05.749 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:05.749 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:05.749 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:05.749 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:05.749 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:05.749 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:05.749 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:05.749 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:05.749 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:05.749 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:05.749 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.749 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:05.749 "name": "Existed_Raid", 00:09:05.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.749 "strip_size_kb": 64, 00:09:05.749 "state": "configuring", 00:09:05.749 "raid_level": "raid0", 00:09:05.749 "superblock": false, 00:09:05.749 "num_base_bdevs": 2, 00:09:05.750 "num_base_bdevs_discovered": 0, 00:09:05.750 "num_base_bdevs_operational": 2, 00:09:05.750 "base_bdevs_list": [ 00:09:05.750 { 00:09:05.750 "name": "BaseBdev1", 00:09:05.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.750 "is_configured": false, 00:09:05.750 "data_offset": 0, 00:09:05.750 "data_size": 0 00:09:05.750 }, 00:09:05.750 { 00:09:05.750 "name": "BaseBdev2", 
00:09:05.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.750 "is_configured": false, 00:09:05.750 "data_offset": 0, 00:09:05.750 "data_size": 0 00:09:05.750 } 00:09:05.750 ] 00:09:05.750 }' 00:09:05.750 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:05.750 06:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.317 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:06.317 [2024-07-23 06:22:18.801757] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:06.317 [2024-07-23 06:22:18.801787] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21426f634500 name Existed_Raid, state configuring 00:09:06.317 06:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:06.576 [2024-07-23 06:22:19.037761] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:06.576 [2024-07-23 06:22:19.037813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:06.576 [2024-07-23 06:22:19.037818] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:06.576 [2024-07-23 06:22:19.037827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.576 06:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:06.835 [2024-07-23 06:22:19.274797] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.835 BaseBdev1 00:09:06.835 06:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:06.835 06:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:06.835 06:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:06.835 06:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:06.835 06:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:06.835 06:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:06.835 06:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:07.094 06:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:07.368 [ 00:09:07.368 { 00:09:07.368 "name": "BaseBdev1", 00:09:07.368 "aliases": [ 00:09:07.368 "e9f0e59a-48bb-11ef-a06c-59ddad71024c" 00:09:07.368 ], 00:09:07.368 "product_name": "Malloc disk", 00:09:07.368 "block_size": 512, 00:09:07.368 "num_blocks": 65536, 00:09:07.368 "uuid": "e9f0e59a-48bb-11ef-a06c-59ddad71024c", 00:09:07.369 "assigned_rate_limits": { 00:09:07.369 "rw_ios_per_sec": 0, 00:09:07.369 "rw_mbytes_per_sec": 0, 00:09:07.369 "r_mbytes_per_sec": 0, 00:09:07.369 "w_mbytes_per_sec": 0 00:09:07.369 }, 
00:09:07.369 "claimed": true, 00:09:07.369 "claim_type": "exclusive_write", 00:09:07.369 "zoned": false, 00:09:07.369 "supported_io_types": { 00:09:07.369 "read": true, 00:09:07.369 "write": true, 00:09:07.369 "unmap": true, 00:09:07.369 "flush": true, 00:09:07.369 "reset": true, 00:09:07.369 "nvme_admin": false, 00:09:07.369 "nvme_io": false, 00:09:07.369 "nvme_io_md": false, 00:09:07.369 "write_zeroes": true, 00:09:07.369 "zcopy": true, 00:09:07.369 "get_zone_info": false, 00:09:07.369 "zone_management": false, 00:09:07.369 "zone_append": false, 00:09:07.369 "compare": false, 00:09:07.369 "compare_and_write": false, 00:09:07.369 "abort": true, 00:09:07.369 "seek_hole": false, 00:09:07.369 "seek_data": false, 00:09:07.369 "copy": true, 00:09:07.369 "nvme_iov_md": false 00:09:07.369 }, 00:09:07.369 "memory_domains": [ 00:09:07.369 { 00:09:07.369 "dma_device_id": "system", 00:09:07.369 "dma_device_type": 1 00:09:07.369 }, 00:09:07.369 { 00:09:07.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.369 "dma_device_type": 2 00:09:07.369 } 00:09:07.369 ], 00:09:07.369 "driver_specific": {} 00:09:07.369 } 00:09:07.369 ] 00:09:07.369 06:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:07.369 06:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:07.369 06:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:07.369 06:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:07.369 06:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:07.369 06:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:07.369 06:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:07.369 06:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:07.369 06:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:07.369 06:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:07.369 06:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:07.369 06:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:07.369 06:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.627 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:07.627 "name": "Existed_Raid", 00:09:07.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.627 "strip_size_kb": 64, 00:09:07.627 "state": "configuring", 00:09:07.627 "raid_level": "raid0", 00:09:07.627 "superblock": false, 00:09:07.627 "num_base_bdevs": 2, 00:09:07.627 "num_base_bdevs_discovered": 1, 00:09:07.627 "num_base_bdevs_operational": 2, 00:09:07.627 "base_bdevs_list": [ 00:09:07.627 { 00:09:07.627 "name": "BaseBdev1", 00:09:07.627 "uuid": "e9f0e59a-48bb-11ef-a06c-59ddad71024c", 00:09:07.627 "is_configured": true, 00:09:07.627 "data_offset": 0, 00:09:07.627 "data_size": 65536 00:09:07.627 }, 00:09:07.627 { 00:09:07.627 "name": "BaseBdev2", 00:09:07.627 "uuid": "00000000-0000-0000-0000-000000000000", 
00:09:07.627 "is_configured": false, 00:09:07.627 "data_offset": 0, 00:09:07.627 "data_size": 0 00:09:07.627 } 00:09:07.627 ] 00:09:07.627 }' 00:09:07.627 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:07.627 06:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.194 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:08.194 [2024-07-23 06:22:20.673798] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.194 [2024-07-23 06:22:20.673831] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21426f634500 name Existed_Raid, state configuring 00:09:08.194 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:08.452 [2024-07-23 06:22:20.953845] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.452 [2024-07-23 06:22:20.954677] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.452 [2024-07-23 06:22:20.954716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.711 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:08.711 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:08.711 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:08.711 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:08.711 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:08.711 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:08.711 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:08.711 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:08.711 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:08.711 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:08.711 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:08.711 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:08.711 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:08.711 06:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.969 06:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:08.969 "name": "Existed_Raid", 00:09:08.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.969 "strip_size_kb": 64, 00:09:08.969 "state": "configuring", 00:09:08.969 "raid_level": "raid0", 00:09:08.969 "superblock": false, 00:09:08.969 "num_base_bdevs": 2, 00:09:08.969 "num_base_bdevs_discovered": 1, 00:09:08.969 
"num_base_bdevs_operational": 2, 00:09:08.969 "base_bdevs_list": [ 00:09:08.969 { 00:09:08.969 "name": "BaseBdev1", 00:09:08.969 "uuid": "e9f0e59a-48bb-11ef-a06c-59ddad71024c", 00:09:08.969 "is_configured": true, 00:09:08.969 "data_offset": 0, 00:09:08.969 "data_size": 65536 00:09:08.969 }, 00:09:08.969 { 00:09:08.969 "name": "BaseBdev2", 00:09:08.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.969 "is_configured": false, 00:09:08.970 "data_offset": 0, 00:09:08.970 "data_size": 0 00:09:08.970 } 00:09:08.970 ] 00:09:08.970 }' 00:09:08.970 06:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:08.970 06:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.229 06:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.496 [2024-07-23 06:22:21.761999] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.496 [2024-07-23 06:22:21.762029] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x21426f634a00 00:09:09.496 [2024-07-23 06:22:21.762040] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:09.496 [2024-07-23 06:22:21.762064] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21426f697e20 00:09:09.496 [2024-07-23 06:22:21.762163] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21426f634a00 00:09:09.496 [2024-07-23 06:22:21.762170] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x21426f634a00 00:09:09.496 [2024-07-23 06:22:21.762207] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.496 BaseBdev2 00:09:09.496 06:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:09.496 06:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:09.496 06:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:09.496 06:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:09.496 06:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:09.496 06:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:09.496 06:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:09.780 06:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.039 [ 00:09:10.039 { 00:09:10.039 "name": "BaseBdev2", 00:09:10.039 "aliases": [ 00:09:10.039 "eb6c8c9c-48bb-11ef-a06c-59ddad71024c" 00:09:10.039 ], 00:09:10.039 "product_name": "Malloc disk", 00:09:10.039 "block_size": 512, 00:09:10.039 "num_blocks": 65536, 00:09:10.039 "uuid": "eb6c8c9c-48bb-11ef-a06c-59ddad71024c", 00:09:10.039 "assigned_rate_limits": { 00:09:10.039 "rw_ios_per_sec": 0, 00:09:10.039 "rw_mbytes_per_sec": 0, 00:09:10.039 "r_mbytes_per_sec": 0, 00:09:10.039 "w_mbytes_per_sec": 0 00:09:10.039 }, 00:09:10.039 "claimed": true, 00:09:10.039 "claim_type": "exclusive_write", 00:09:10.039 "zoned": 
false, 00:09:10.039 "supported_io_types": { 00:09:10.039 "read": true, 00:09:10.039 "write": true, 00:09:10.039 "unmap": true, 00:09:10.039 "flush": true, 00:09:10.039 "reset": true, 00:09:10.039 "nvme_admin": false, 00:09:10.039 "nvme_io": false, 00:09:10.039 "nvme_io_md": false, 00:09:10.039 "write_zeroes": true, 00:09:10.039 "zcopy": true, 00:09:10.039 "get_zone_info": false, 00:09:10.039 "zone_management": false, 00:09:10.039 "zone_append": false, 00:09:10.039 "compare": false, 00:09:10.039 "compare_and_write": false, 00:09:10.039 "abort": true, 00:09:10.039 "seek_hole": false, 00:09:10.039 "seek_data": false, 00:09:10.039 "copy": true, 00:09:10.039 "nvme_iov_md": false 00:09:10.039 }, 00:09:10.039 "memory_domains": [ 00:09:10.039 { 00:09:10.039 "dma_device_id": "system", 00:09:10.039 "dma_device_type": 1 00:09:10.039 }, 00:09:10.039 { 00:09:10.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.039 "dma_device_type": 2 00:09:10.039 } 00:09:10.039 ], 00:09:10.039 "driver_specific": {} 00:09:10.039 } 00:09:10.039 ] 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.040 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.298 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:10.298 "name": "Existed_Raid", 00:09:10.298 "uuid": "eb6c93f0-48bb-11ef-a06c-59ddad71024c", 00:09:10.298 "strip_size_kb": 64, 00:09:10.298 "state": "online", 00:09:10.298 "raid_level": "raid0", 00:09:10.298 "superblock": false, 00:09:10.298 "num_base_bdevs": 2, 00:09:10.298 "num_base_bdevs_discovered": 2, 00:09:10.298 "num_base_bdevs_operational": 2, 00:09:10.298 "base_bdevs_list": [ 00:09:10.298 { 00:09:10.298 "name": "BaseBdev1", 00:09:10.298 "uuid": "e9f0e59a-48bb-11ef-a06c-59ddad71024c", 00:09:10.298 "is_configured": true, 00:09:10.298 "data_offset": 0, 00:09:10.298 "data_size": 65536 00:09:10.298 }, 00:09:10.298 { 
00:09:10.298 "name": "BaseBdev2", 00:09:10.298 "uuid": "eb6c8c9c-48bb-11ef-a06c-59ddad71024c", 00:09:10.298 "is_configured": true, 00:09:10.298 "data_offset": 0, 00:09:10.298 "data_size": 65536 00:09:10.298 } 00:09:10.298 ] 00:09:10.298 }' 00:09:10.298 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:10.298 06:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.557 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:10.557 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:10.557 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:10.557 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:10.557 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:10.557 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:10.557 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:10.557 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:10.816 [2024-07-23 06:22:23.209925] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.816 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:10.816 "name": "Existed_Raid", 00:09:10.816 "aliases": [ 00:09:10.816 "eb6c93f0-48bb-11ef-a06c-59ddad71024c" 00:09:10.816 ], 00:09:10.816 "product_name": "Raid Volume", 00:09:10.816 "block_size": 512, 00:09:10.816 "num_blocks": 131072, 00:09:10.816 "uuid": "eb6c93f0-48bb-11ef-a06c-59ddad71024c", 00:09:10.816 "assigned_rate_limits": { 00:09:10.816 "rw_ios_per_sec": 0, 00:09:10.816 "rw_mbytes_per_sec": 0, 00:09:10.816 "r_mbytes_per_sec": 0, 00:09:10.816 "w_mbytes_per_sec": 0 00:09:10.816 }, 00:09:10.816 "claimed": false, 00:09:10.816 "zoned": false, 00:09:10.816 "supported_io_types": { 00:09:10.816 "read": true, 00:09:10.816 "write": true, 00:09:10.816 "unmap": true, 00:09:10.816 "flush": true, 00:09:10.816 "reset": true, 00:09:10.816 "nvme_admin": false, 00:09:10.816 "nvme_io": false, 00:09:10.816 "nvme_io_md": false, 00:09:10.816 "write_zeroes": true, 00:09:10.816 "zcopy": false, 00:09:10.816 "get_zone_info": false, 00:09:10.816 "zone_management": false, 00:09:10.816 "zone_append": false, 00:09:10.816 "compare": false, 00:09:10.816 "compare_and_write": false, 00:09:10.816 "abort": false, 00:09:10.816 "seek_hole": false, 00:09:10.816 "seek_data": false, 00:09:10.816 "copy": false, 00:09:10.816 "nvme_iov_md": false 00:09:10.816 }, 00:09:10.816 "memory_domains": [ 00:09:10.816 { 00:09:10.816 "dma_device_id": "system", 00:09:10.816 "dma_device_type": 1 00:09:10.816 }, 00:09:10.816 { 00:09:10.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.816 "dma_device_type": 2 00:09:10.816 }, 00:09:10.816 { 00:09:10.816 "dma_device_id": "system", 00:09:10.816 "dma_device_type": 1 00:09:10.816 }, 00:09:10.816 { 00:09:10.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.816 "dma_device_type": 2 00:09:10.816 } 00:09:10.816 ], 00:09:10.816 "driver_specific": { 00:09:10.816 "raid": { 00:09:10.816 "uuid": "eb6c93f0-48bb-11ef-a06c-59ddad71024c", 00:09:10.816 "strip_size_kb": 64, 00:09:10.816 "state": 
"online", 00:09:10.816 "raid_level": "raid0", 00:09:10.816 "superblock": false, 00:09:10.816 "num_base_bdevs": 2, 00:09:10.816 "num_base_bdevs_discovered": 2, 00:09:10.816 "num_base_bdevs_operational": 2, 00:09:10.816 "base_bdevs_list": [ 00:09:10.816 { 00:09:10.816 "name": "BaseBdev1", 00:09:10.816 "uuid": "e9f0e59a-48bb-11ef-a06c-59ddad71024c", 00:09:10.816 "is_configured": true, 00:09:10.816 "data_offset": 0, 00:09:10.816 "data_size": 65536 00:09:10.816 }, 00:09:10.816 { 00:09:10.816 "name": "BaseBdev2", 00:09:10.816 "uuid": "eb6c8c9c-48bb-11ef-a06c-59ddad71024c", 00:09:10.816 "is_configured": true, 00:09:10.816 "data_offset": 0, 00:09:10.816 "data_size": 65536 00:09:10.816 } 00:09:10.816 ] 00:09:10.816 } 00:09:10.816 } 00:09:10.816 }' 00:09:10.816 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:10.816 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:10.816 BaseBdev2' 00:09:10.816 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:10.816 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:10.816 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:11.074 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:11.074 "name": "BaseBdev1", 00:09:11.074 "aliases": [ 00:09:11.074 "e9f0e59a-48bb-11ef-a06c-59ddad71024c" 00:09:11.074 ], 00:09:11.074 "product_name": "Malloc disk", 00:09:11.074 "block_size": 512, 00:09:11.074 "num_blocks": 65536, 00:09:11.074 "uuid": "e9f0e59a-48bb-11ef-a06c-59ddad71024c", 00:09:11.074 "assigned_rate_limits": { 00:09:11.074 "rw_ios_per_sec": 0, 00:09:11.074 "rw_mbytes_per_sec": 0, 00:09:11.074 "r_mbytes_per_sec": 0, 00:09:11.074 "w_mbytes_per_sec": 0 00:09:11.074 }, 00:09:11.074 "claimed": true, 00:09:11.074 "claim_type": "exclusive_write", 00:09:11.074 "zoned": false, 00:09:11.074 "supported_io_types": { 00:09:11.074 "read": true, 00:09:11.074 "write": true, 00:09:11.074 "unmap": true, 00:09:11.074 "flush": true, 00:09:11.074 "reset": true, 00:09:11.074 "nvme_admin": false, 00:09:11.074 "nvme_io": false, 00:09:11.074 "nvme_io_md": false, 00:09:11.074 "write_zeroes": true, 00:09:11.074 "zcopy": true, 00:09:11.074 "get_zone_info": false, 00:09:11.074 "zone_management": false, 00:09:11.074 "zone_append": false, 00:09:11.074 "compare": false, 00:09:11.074 "compare_and_write": false, 00:09:11.074 "abort": true, 00:09:11.074 "seek_hole": false, 00:09:11.074 "seek_data": false, 00:09:11.074 "copy": true, 00:09:11.074 "nvme_iov_md": false 00:09:11.074 }, 00:09:11.074 "memory_domains": [ 00:09:11.074 { 00:09:11.074 "dma_device_id": "system", 00:09:11.074 "dma_device_type": 1 00:09:11.074 }, 00:09:11.074 { 00:09:11.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.074 "dma_device_type": 2 00:09:11.074 } 00:09:11.074 ], 00:09:11.074 "driver_specific": {} 00:09:11.074 }' 00:09:11.074 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:11.074 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:11.074 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:11.074 06:22:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:11.074 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:11.074 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:11.074 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:11.333 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:11.333 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:11.333 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:11.333 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:11.333 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:11.333 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:11.333 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:11.333 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:11.592 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:11.592 "name": "BaseBdev2", 00:09:11.592 "aliases": [ 00:09:11.592 "eb6c8c9c-48bb-11ef-a06c-59ddad71024c" 00:09:11.592 ], 00:09:11.592 "product_name": "Malloc disk", 00:09:11.592 "block_size": 512, 00:09:11.592 "num_blocks": 65536, 00:09:11.592 "uuid": "eb6c8c9c-48bb-11ef-a06c-59ddad71024c", 00:09:11.592 "assigned_rate_limits": { 00:09:11.592 "rw_ios_per_sec": 0, 00:09:11.592 "rw_mbytes_per_sec": 0, 00:09:11.592 "r_mbytes_per_sec": 0, 00:09:11.592 "w_mbytes_per_sec": 0 00:09:11.592 }, 00:09:11.592 "claimed": true, 00:09:11.592 "claim_type": "exclusive_write", 00:09:11.592 "zoned": false, 00:09:11.592 "supported_io_types": { 00:09:11.592 "read": true, 00:09:11.592 "write": true, 00:09:11.592 "unmap": true, 00:09:11.592 "flush": true, 00:09:11.592 "reset": true, 00:09:11.592 "nvme_admin": false, 00:09:11.592 "nvme_io": false, 00:09:11.592 "nvme_io_md": false, 00:09:11.592 "write_zeroes": true, 00:09:11.592 "zcopy": true, 00:09:11.592 "get_zone_info": false, 00:09:11.592 "zone_management": false, 00:09:11.592 "zone_append": false, 00:09:11.592 "compare": false, 00:09:11.592 "compare_and_write": false, 00:09:11.592 "abort": true, 00:09:11.592 "seek_hole": false, 00:09:11.592 "seek_data": false, 00:09:11.592 "copy": true, 00:09:11.592 "nvme_iov_md": false 00:09:11.592 }, 00:09:11.592 "memory_domains": [ 00:09:11.592 { 00:09:11.592 "dma_device_id": "system", 00:09:11.592 "dma_device_type": 1 00:09:11.592 }, 00:09:11.592 { 00:09:11.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.592 "dma_device_type": 2 00:09:11.592 } 00:09:11.592 ], 00:09:11.592 "driver_specific": {} 00:09:11.592 }' 00:09:11.592 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:11.592 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:11.592 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:11.592 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:11.592 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:11.592 06:22:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:11.592 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:11.592 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:11.592 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:11.592 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:11.592 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:11.592 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:11.592 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:11.864 [2024-07-23 06:22:24.145925] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.864 [2024-07-23 06:22:24.145961] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.864 [2024-07-23 06:22:24.145975] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.864 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.133 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:12.133 "name": "Existed_Raid", 00:09:12.133 "uuid": "eb6c93f0-48bb-11ef-a06c-59ddad71024c", 00:09:12.133 "strip_size_kb": 64, 00:09:12.133 "state": "offline", 00:09:12.133 "raid_level": "raid0", 00:09:12.133 "superblock": false, 00:09:12.133 
"num_base_bdevs": 2, 00:09:12.133 "num_base_bdevs_discovered": 1, 00:09:12.133 "num_base_bdevs_operational": 1, 00:09:12.133 "base_bdevs_list": [ 00:09:12.133 { 00:09:12.133 "name": null, 00:09:12.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.133 "is_configured": false, 00:09:12.133 "data_offset": 0, 00:09:12.133 "data_size": 65536 00:09:12.133 }, 00:09:12.133 { 00:09:12.133 "name": "BaseBdev2", 00:09:12.133 "uuid": "eb6c8c9c-48bb-11ef-a06c-59ddad71024c", 00:09:12.133 "is_configured": true, 00:09:12.133 "data_offset": 0, 00:09:12.133 "data_size": 65536 00:09:12.133 } 00:09:12.133 ] 00:09:12.133 }' 00:09:12.133 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:12.133 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.391 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:12.391 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:12.391 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:12.391 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:12.650 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:12.650 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.650 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:12.908 [2024-07-23 06:22:25.219899] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.908 [2024-07-23 06:22:25.219932] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21426f634a00 name Existed_Raid, state offline 00:09:12.908 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:12.908 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:12.908 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:12.908 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 48676 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 48676 ']' 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 48676 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 48676 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:13.166 killing process with pid 48676 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48676' 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 48676 00:09:13.166 [2024-07-23 06:22:25.484244] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.166 [2024-07-23 06:22:25.484279] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 48676 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:13.166 00:09:13.166 real 0m8.998s 00:09:13.166 user 0m15.716s 00:09:13.166 sys 0m1.514s 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.166 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.166 ************************************ 00:09:13.166 END TEST raid_state_function_test 00:09:13.166 ************************************ 00:09:13.425 06:22:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:13.425 06:22:25 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:13.425 06:22:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:13.425 06:22:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.425 06:22:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.425 ************************************ 00:09:13.425 START TEST raid_state_function_test_sb 00:09:13.425 ************************************ 00:09:13.425 06:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:09:13.425 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=48951 00:09:13.426 Process raid pid: 48951 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48951' 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 48951 /var/tmp/spdk-raid.sock 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 48951 ']' 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:13.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:13.426 06:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.426 [2024-07-23 06:22:25.728442] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:13.426 [2024-07-23 06:22:25.728654] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:13.993 EAL: TSC is not safe to use in SMP mode 00:09:13.993 EAL: TSC is not invariant 00:09:13.993 [2024-07-23 06:22:26.266775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.993 [2024-07-23 06:22:26.345467] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
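raid_state_function_test_sb repeats the same walk with superblocks enabled; on the RPC surface the only difference is the -s flag passed to bdev_raid_create, as the lines that follow show. A one-line sketch of that change:

# superblock=true turns superblock_create_arg into "-s"
$rpc_py bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid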
00:09:13.993 [2024-07-23 06:22:26.347638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.993 [2024-07-23 06:22:26.348420] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.993 [2024-07-23 06:22:26.348434] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.573 06:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:14.573 06:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:09:14.573 06:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:14.573 [2024-07-23 06:22:27.044043] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.573 [2024-07-23 06:22:27.044135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.573 [2024-07-23 06:22:27.044156] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.573 [2024-07-23 06:22:27.044164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.573 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:14.573 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:14.573 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:14.573 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:14.573 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:14.573 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:14.573 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:14.573 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:14.573 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:14.573 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:14.573 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:14.573 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.833 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:14.833 "name": "Existed_Raid", 00:09:14.833 "uuid": "ee928b9a-48bb-11ef-a06c-59ddad71024c", 00:09:14.833 "strip_size_kb": 64, 00:09:14.833 "state": "configuring", 00:09:14.833 "raid_level": "raid0", 00:09:14.833 "superblock": true, 00:09:14.833 "num_base_bdevs": 2, 00:09:14.833 "num_base_bdevs_discovered": 0, 00:09:14.833 "num_base_bdevs_operational": 2, 00:09:14.833 "base_bdevs_list": [ 00:09:14.833 { 00:09:14.833 "name": "BaseBdev1", 00:09:14.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.833 "is_configured": false, 00:09:14.833 "data_offset": 0, 00:09:14.833 "data_size": 0 00:09:14.833 }, 
00:09:14.833 { 00:09:14.833 "name": "BaseBdev2", 00:09:14.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.833 "is_configured": false, 00:09:14.833 "data_offset": 0, 00:09:14.833 "data_size": 0 00:09:14.833 } 00:09:14.833 ] 00:09:14.833 }' 00:09:14.833 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:14.833 06:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.399 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:15.399 [2024-07-23 06:22:27.848100] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.399 [2024-07-23 06:22:27.848129] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x398206a34500 name Existed_Raid, state configuring 00:09:15.399 06:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:15.658 [2024-07-23 06:22:28.120138] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:15.658 [2024-07-23 06:22:28.120185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:15.658 [2024-07-23 06:22:28.120190] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.658 [2024-07-23 06:22:28.120215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.658 06:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:15.916 [2024-07-23 06:22:28.353260] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.916 BaseBdev1 00:09:15.916 06:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:15.916 06:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:15.916 06:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:15.916 06:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:15.916 06:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:15.916 06:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:15.916 06:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:16.175 06:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:16.433 [ 00:09:16.433 { 00:09:16.433 "name": "BaseBdev1", 00:09:16.433 "aliases": [ 00:09:16.433 "ef5a26c6-48bb-11ef-a06c-59ddad71024c" 00:09:16.433 ], 00:09:16.433 "product_name": "Malloc disk", 00:09:16.433 "block_size": 512, 00:09:16.433 "num_blocks": 65536, 00:09:16.433 "uuid": "ef5a26c6-48bb-11ef-a06c-59ddad71024c", 00:09:16.433 "assigned_rate_limits": { 00:09:16.433 "rw_ios_per_sec": 0, 00:09:16.433 "rw_mbytes_per_sec": 
0, 00:09:16.433 "r_mbytes_per_sec": 0, 00:09:16.433 "w_mbytes_per_sec": 0 00:09:16.433 }, 00:09:16.433 "claimed": true, 00:09:16.433 "claim_type": "exclusive_write", 00:09:16.433 "zoned": false, 00:09:16.433 "supported_io_types": { 00:09:16.433 "read": true, 00:09:16.433 "write": true, 00:09:16.433 "unmap": true, 00:09:16.433 "flush": true, 00:09:16.433 "reset": true, 00:09:16.433 "nvme_admin": false, 00:09:16.433 "nvme_io": false, 00:09:16.433 "nvme_io_md": false, 00:09:16.433 "write_zeroes": true, 00:09:16.433 "zcopy": true, 00:09:16.433 "get_zone_info": false, 00:09:16.433 "zone_management": false, 00:09:16.433 "zone_append": false, 00:09:16.433 "compare": false, 00:09:16.433 "compare_and_write": false, 00:09:16.433 "abort": true, 00:09:16.433 "seek_hole": false, 00:09:16.433 "seek_data": false, 00:09:16.433 "copy": true, 00:09:16.433 "nvme_iov_md": false 00:09:16.433 }, 00:09:16.433 "memory_domains": [ 00:09:16.433 { 00:09:16.433 "dma_device_id": "system", 00:09:16.433 "dma_device_type": 1 00:09:16.433 }, 00:09:16.433 { 00:09:16.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.433 "dma_device_type": 2 00:09:16.433 } 00:09:16.433 ], 00:09:16.433 "driver_specific": {} 00:09:16.433 } 00:09:16.433 ] 00:09:16.433 06:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:16.433 06:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:16.433 06:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:16.433 06:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:16.433 06:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:16.433 06:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:16.433 06:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:16.433 06:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:16.433 06:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:16.433 06:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:16.433 06:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:16.433 06:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.433 06:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.693 06:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:16.693 "name": "Existed_Raid", 00:09:16.693 "uuid": "ef36be9d-48bb-11ef-a06c-59ddad71024c", 00:09:16.693 "strip_size_kb": 64, 00:09:16.693 "state": "configuring", 00:09:16.693 "raid_level": "raid0", 00:09:16.693 "superblock": true, 00:09:16.693 "num_base_bdevs": 2, 00:09:16.693 "num_base_bdevs_discovered": 1, 00:09:16.693 "num_base_bdevs_operational": 2, 00:09:16.693 "base_bdevs_list": [ 00:09:16.693 { 00:09:16.693 "name": "BaseBdev1", 00:09:16.693 "uuid": "ef5a26c6-48bb-11ef-a06c-59ddad71024c", 00:09:16.693 "is_configured": true, 00:09:16.693 "data_offset": 2048, 00:09:16.693 "data_size": 
63488 00:09:16.693 }, 00:09:16.693 { 00:09:16.693 "name": "BaseBdev2", 00:09:16.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.693 "is_configured": false, 00:09:16.693 "data_offset": 0, 00:09:16.693 "data_size": 0 00:09:16.693 } 00:09:16.693 ] 00:09:16.693 }' 00:09:16.693 06:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:16.693 06:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.963 06:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:17.222 [2024-07-23 06:22:29.668211] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:17.222 [2024-07-23 06:22:29.668243] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x398206a34500 name Existed_Raid, state configuring 00:09:17.222 06:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:17.480 [2024-07-23 06:22:29.992249] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.480 [2024-07-23 06:22:29.993074] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.480 [2024-07-23 06:22:29.993113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:17.739 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:17.739 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:17.739 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:17.739 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:17.739 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:17.739 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:17.739 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:17.739 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:17.739 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:17.739 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:17.739 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:17.739 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:17.739 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:17.739 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.997 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:17.997 "name": "Existed_Raid", 00:09:17.997 "uuid": "f05467bb-48bb-11ef-a06c-59ddad71024c", 00:09:17.997 "strip_size_kb": 64, 00:09:17.997 
"state": "configuring", 00:09:17.997 "raid_level": "raid0", 00:09:17.997 "superblock": true, 00:09:17.997 "num_base_bdevs": 2, 00:09:17.997 "num_base_bdevs_discovered": 1, 00:09:17.997 "num_base_bdevs_operational": 2, 00:09:17.997 "base_bdevs_list": [ 00:09:17.997 { 00:09:17.997 "name": "BaseBdev1", 00:09:17.997 "uuid": "ef5a26c6-48bb-11ef-a06c-59ddad71024c", 00:09:17.997 "is_configured": true, 00:09:17.997 "data_offset": 2048, 00:09:17.997 "data_size": 63488 00:09:17.997 }, 00:09:17.997 { 00:09:17.997 "name": "BaseBdev2", 00:09:17.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.997 "is_configured": false, 00:09:17.997 "data_offset": 0, 00:09:17.997 "data_size": 0 00:09:17.997 } 00:09:17.997 ] 00:09:17.997 }' 00:09:17.997 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:17.997 06:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.256 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:18.517 [2024-07-23 06:22:30.892382] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.517 [2024-07-23 06:22:30.892472] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x398206a34a00 00:09:18.517 [2024-07-23 06:22:30.892479] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:18.517 [2024-07-23 06:22:30.892502] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x398206a97e20 00:09:18.517 [2024-07-23 06:22:30.892548] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x398206a34a00 00:09:18.517 [2024-07-23 06:22:30.892553] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x398206a34a00 00:09:18.517 [2024-07-23 06:22:30.892573] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.517 BaseBdev2 00:09:18.517 06:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:18.517 06:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:18.517 06:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:18.517 06:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:18.517 06:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:18.517 06:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:18.517 06:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:18.777 06:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:19.036 [ 00:09:19.036 { 00:09:19.036 "name": "BaseBdev2", 00:09:19.036 "aliases": [ 00:09:19.036 "f0ddbc8e-48bb-11ef-a06c-59ddad71024c" 00:09:19.036 ], 00:09:19.036 "product_name": "Malloc disk", 00:09:19.036 "block_size": 512, 00:09:19.036 "num_blocks": 65536, 00:09:19.036 "uuid": "f0ddbc8e-48bb-11ef-a06c-59ddad71024c", 00:09:19.036 "assigned_rate_limits": { 00:09:19.036 "rw_ios_per_sec": 0, 
00:09:19.036 "rw_mbytes_per_sec": 0, 00:09:19.036 "r_mbytes_per_sec": 0, 00:09:19.036 "w_mbytes_per_sec": 0 00:09:19.036 }, 00:09:19.036 "claimed": true, 00:09:19.036 "claim_type": "exclusive_write", 00:09:19.036 "zoned": false, 00:09:19.036 "supported_io_types": { 00:09:19.036 "read": true, 00:09:19.036 "write": true, 00:09:19.036 "unmap": true, 00:09:19.036 "flush": true, 00:09:19.036 "reset": true, 00:09:19.036 "nvme_admin": false, 00:09:19.036 "nvme_io": false, 00:09:19.036 "nvme_io_md": false, 00:09:19.036 "write_zeroes": true, 00:09:19.036 "zcopy": true, 00:09:19.036 "get_zone_info": false, 00:09:19.036 "zone_management": false, 00:09:19.036 "zone_append": false, 00:09:19.036 "compare": false, 00:09:19.036 "compare_and_write": false, 00:09:19.036 "abort": true, 00:09:19.036 "seek_hole": false, 00:09:19.036 "seek_data": false, 00:09:19.036 "copy": true, 00:09:19.036 "nvme_iov_md": false 00:09:19.036 }, 00:09:19.036 "memory_domains": [ 00:09:19.036 { 00:09:19.036 "dma_device_id": "system", 00:09:19.036 "dma_device_type": 1 00:09:19.036 }, 00:09:19.036 { 00:09:19.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.036 "dma_device_type": 2 00:09:19.036 } 00:09:19.036 ], 00:09:19.036 "driver_specific": {} 00:09:19.036 } 00:09:19.036 ] 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:19.036 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.295 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:19.295 "name": "Existed_Raid", 00:09:19.295 "uuid": "f05467bb-48bb-11ef-a06c-59ddad71024c", 00:09:19.295 "strip_size_kb": 64, 00:09:19.295 "state": "online", 00:09:19.295 "raid_level": "raid0", 00:09:19.295 "superblock": true, 00:09:19.295 "num_base_bdevs": 2, 00:09:19.295 "num_base_bdevs_discovered": 2, 00:09:19.295 "num_base_bdevs_operational": 2, 
00:09:19.295 "base_bdevs_list": [ 00:09:19.295 { 00:09:19.295 "name": "BaseBdev1", 00:09:19.295 "uuid": "ef5a26c6-48bb-11ef-a06c-59ddad71024c", 00:09:19.295 "is_configured": true, 00:09:19.295 "data_offset": 2048, 00:09:19.295 "data_size": 63488 00:09:19.295 }, 00:09:19.295 { 00:09:19.295 "name": "BaseBdev2", 00:09:19.295 "uuid": "f0ddbc8e-48bb-11ef-a06c-59ddad71024c", 00:09:19.295 "is_configured": true, 00:09:19.295 "data_offset": 2048, 00:09:19.295 "data_size": 63488 00:09:19.295 } 00:09:19.295 ] 00:09:19.295 }' 00:09:19.295 06:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:19.295 06:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.863 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:19.863 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:19.863 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:19.863 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:19.863 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:19.863 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:19.863 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:19.863 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:19.863 [2024-07-23 06:22:32.328330] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.863 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:19.863 "name": "Existed_Raid", 00:09:19.863 "aliases": [ 00:09:19.863 "f05467bb-48bb-11ef-a06c-59ddad71024c" 00:09:19.863 ], 00:09:19.863 "product_name": "Raid Volume", 00:09:19.863 "block_size": 512, 00:09:19.863 "num_blocks": 126976, 00:09:19.863 "uuid": "f05467bb-48bb-11ef-a06c-59ddad71024c", 00:09:19.863 "assigned_rate_limits": { 00:09:19.863 "rw_ios_per_sec": 0, 00:09:19.863 "rw_mbytes_per_sec": 0, 00:09:19.863 "r_mbytes_per_sec": 0, 00:09:19.863 "w_mbytes_per_sec": 0 00:09:19.863 }, 00:09:19.863 "claimed": false, 00:09:19.863 "zoned": false, 00:09:19.863 "supported_io_types": { 00:09:19.863 "read": true, 00:09:19.863 "write": true, 00:09:19.863 "unmap": true, 00:09:19.863 "flush": true, 00:09:19.863 "reset": true, 00:09:19.863 "nvme_admin": false, 00:09:19.863 "nvme_io": false, 00:09:19.863 "nvme_io_md": false, 00:09:19.863 "write_zeroes": true, 00:09:19.863 "zcopy": false, 00:09:19.863 "get_zone_info": false, 00:09:19.863 "zone_management": false, 00:09:19.863 "zone_append": false, 00:09:19.863 "compare": false, 00:09:19.863 "compare_and_write": false, 00:09:19.863 "abort": false, 00:09:19.863 "seek_hole": false, 00:09:19.863 "seek_data": false, 00:09:19.863 "copy": false, 00:09:19.863 "nvme_iov_md": false 00:09:19.863 }, 00:09:19.863 "memory_domains": [ 00:09:19.863 { 00:09:19.863 "dma_device_id": "system", 00:09:19.863 "dma_device_type": 1 00:09:19.863 }, 00:09:19.863 { 00:09:19.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.863 "dma_device_type": 2 00:09:19.863 }, 00:09:19.863 { 00:09:19.863 "dma_device_id": "system", 00:09:19.863 "dma_device_type": 1 00:09:19.863 
}, 00:09:19.863 { 00:09:19.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.863 "dma_device_type": 2 00:09:19.863 } 00:09:19.863 ], 00:09:19.863 "driver_specific": { 00:09:19.863 "raid": { 00:09:19.863 "uuid": "f05467bb-48bb-11ef-a06c-59ddad71024c", 00:09:19.863 "strip_size_kb": 64, 00:09:19.863 "state": "online", 00:09:19.863 "raid_level": "raid0", 00:09:19.863 "superblock": true, 00:09:19.863 "num_base_bdevs": 2, 00:09:19.863 "num_base_bdevs_discovered": 2, 00:09:19.863 "num_base_bdevs_operational": 2, 00:09:19.863 "base_bdevs_list": [ 00:09:19.863 { 00:09:19.863 "name": "BaseBdev1", 00:09:19.863 "uuid": "ef5a26c6-48bb-11ef-a06c-59ddad71024c", 00:09:19.863 "is_configured": true, 00:09:19.863 "data_offset": 2048, 00:09:19.863 "data_size": 63488 00:09:19.863 }, 00:09:19.863 { 00:09:19.863 "name": "BaseBdev2", 00:09:19.863 "uuid": "f0ddbc8e-48bb-11ef-a06c-59ddad71024c", 00:09:19.863 "is_configured": true, 00:09:19.863 "data_offset": 2048, 00:09:19.863 "data_size": 63488 00:09:19.863 } 00:09:19.863 ] 00:09:19.863 } 00:09:19.863 } 00:09:19.863 }' 00:09:19.863 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:19.863 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:19.863 BaseBdev2' 00:09:19.863 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:19.863 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:19.863 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:20.122 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:20.122 "name": "BaseBdev1", 00:09:20.122 "aliases": [ 00:09:20.122 "ef5a26c6-48bb-11ef-a06c-59ddad71024c" 00:09:20.122 ], 00:09:20.122 "product_name": "Malloc disk", 00:09:20.122 "block_size": 512, 00:09:20.122 "num_blocks": 65536, 00:09:20.122 "uuid": "ef5a26c6-48bb-11ef-a06c-59ddad71024c", 00:09:20.122 "assigned_rate_limits": { 00:09:20.122 "rw_ios_per_sec": 0, 00:09:20.122 "rw_mbytes_per_sec": 0, 00:09:20.122 "r_mbytes_per_sec": 0, 00:09:20.122 "w_mbytes_per_sec": 0 00:09:20.122 }, 00:09:20.122 "claimed": true, 00:09:20.122 "claim_type": "exclusive_write", 00:09:20.122 "zoned": false, 00:09:20.122 "supported_io_types": { 00:09:20.122 "read": true, 00:09:20.122 "write": true, 00:09:20.122 "unmap": true, 00:09:20.122 "flush": true, 00:09:20.122 "reset": true, 00:09:20.122 "nvme_admin": false, 00:09:20.122 "nvme_io": false, 00:09:20.122 "nvme_io_md": false, 00:09:20.122 "write_zeroes": true, 00:09:20.122 "zcopy": true, 00:09:20.122 "get_zone_info": false, 00:09:20.122 "zone_management": false, 00:09:20.122 "zone_append": false, 00:09:20.122 "compare": false, 00:09:20.122 "compare_and_write": false, 00:09:20.122 "abort": true, 00:09:20.122 "seek_hole": false, 00:09:20.122 "seek_data": false, 00:09:20.122 "copy": true, 00:09:20.122 "nvme_iov_md": false 00:09:20.122 }, 00:09:20.122 "memory_domains": [ 00:09:20.122 { 00:09:20.122 "dma_device_id": "system", 00:09:20.122 "dma_device_type": 1 00:09:20.122 }, 00:09:20.122 { 00:09:20.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.122 "dma_device_type": 2 00:09:20.122 } 00:09:20.123 ], 00:09:20.123 "driver_specific": {} 00:09:20.123 }' 00:09:20.123 06:22:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:20.123 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:20.123 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:20.123 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:20.123 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:20.123 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:20.123 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:20.123 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:20.382 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:20.382 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:20.382 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:20.382 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:20.382 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:20.382 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:20.382 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:20.641 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:20.641 "name": "BaseBdev2", 00:09:20.641 "aliases": [ 00:09:20.641 "f0ddbc8e-48bb-11ef-a06c-59ddad71024c" 00:09:20.641 ], 00:09:20.641 "product_name": "Malloc disk", 00:09:20.641 "block_size": 512, 00:09:20.641 "num_blocks": 65536, 00:09:20.641 "uuid": "f0ddbc8e-48bb-11ef-a06c-59ddad71024c", 00:09:20.641 "assigned_rate_limits": { 00:09:20.641 "rw_ios_per_sec": 0, 00:09:20.641 "rw_mbytes_per_sec": 0, 00:09:20.641 "r_mbytes_per_sec": 0, 00:09:20.641 "w_mbytes_per_sec": 0 00:09:20.641 }, 00:09:20.641 "claimed": true, 00:09:20.641 "claim_type": "exclusive_write", 00:09:20.641 "zoned": false, 00:09:20.641 "supported_io_types": { 00:09:20.641 "read": true, 00:09:20.641 "write": true, 00:09:20.641 "unmap": true, 00:09:20.641 "flush": true, 00:09:20.641 "reset": true, 00:09:20.641 "nvme_admin": false, 00:09:20.641 "nvme_io": false, 00:09:20.641 "nvme_io_md": false, 00:09:20.641 "write_zeroes": true, 00:09:20.641 "zcopy": true, 00:09:20.641 "get_zone_info": false, 00:09:20.641 "zone_management": false, 00:09:20.641 "zone_append": false, 00:09:20.641 "compare": false, 00:09:20.641 "compare_and_write": false, 00:09:20.641 "abort": true, 00:09:20.641 "seek_hole": false, 00:09:20.641 "seek_data": false, 00:09:20.641 "copy": true, 00:09:20.641 "nvme_iov_md": false 00:09:20.641 }, 00:09:20.641 "memory_domains": [ 00:09:20.641 { 00:09:20.641 "dma_device_id": "system", 00:09:20.641 "dma_device_type": 1 00:09:20.641 }, 00:09:20.641 { 00:09:20.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.641 "dma_device_type": 2 00:09:20.641 } 00:09:20.641 ], 00:09:20.642 "driver_specific": {} 00:09:20.642 }' 00:09:20.642 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:20.642 06:22:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:20.642 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:20.642 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:20.642 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:20.642 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:20.642 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:20.642 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:20.642 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:20.642 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:20.642 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:20.642 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:20.642 06:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:20.901 [2024-07-23 06:22:33.220340] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:20.901 [2024-07-23 06:22:33.220368] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.901 [2024-07-23 06:22:33.220398] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:09:20.901 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.159 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:21.159 "name": "Existed_Raid", 00:09:21.159 "uuid": "f05467bb-48bb-11ef-a06c-59ddad71024c", 00:09:21.160 "strip_size_kb": 64, 00:09:21.160 "state": "offline", 00:09:21.160 "raid_level": "raid0", 00:09:21.160 "superblock": true, 00:09:21.160 "num_base_bdevs": 2, 00:09:21.160 "num_base_bdevs_discovered": 1, 00:09:21.160 "num_base_bdevs_operational": 1, 00:09:21.160 "base_bdevs_list": [ 00:09:21.160 { 00:09:21.160 "name": null, 00:09:21.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.160 "is_configured": false, 00:09:21.160 "data_offset": 2048, 00:09:21.160 "data_size": 63488 00:09:21.160 }, 00:09:21.160 { 00:09:21.160 "name": "BaseBdev2", 00:09:21.160 "uuid": "f0ddbc8e-48bb-11ef-a06c-59ddad71024c", 00:09:21.160 "is_configured": true, 00:09:21.160 "data_offset": 2048, 00:09:21.160 "data_size": 63488 00:09:21.160 } 00:09:21.160 ] 00:09:21.160 }' 00:09:21.160 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:21.160 06:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.419 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:21.419 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:21.419 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.419 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:21.677 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:21.677 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.677 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:21.936 [2024-07-23 06:22:34.318375] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.936 [2024-07-23 06:22:34.318407] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x398206a34a00 name Existed_Raid, state offline 00:09:21.936 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:21.936 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:21.936 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.936 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 48951 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@948 -- # '[' -z 48951 ']' 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 48951 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 48951 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:22.195 killing process with pid 48951 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48951' 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 48951 00:09:22.195 [2024-07-23 06:22:34.630646] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.195 [2024-07-23 06:22:34.630681] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.195 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 48951 00:09:22.454 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:09:22.454 ************************************ 00:09:22.454 00:09:22.454 real 0m9.098s 00:09:22.454 user 0m15.921s 00:09:22.454 sys 0m1.501s 00:09:22.454 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.454 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.454 END TEST raid_state_function_test_sb 00:09:22.454 ************************************ 00:09:22.454 06:22:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:22.454 06:22:34 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:22.454 06:22:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:22.454 06:22:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.454 06:22:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:22.454 ************************************ 00:09:22.454 START TEST raid_superblock_test 00:09:22.454 ************************************ 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 
-- # local base_bdevs_pt_uuid 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=49225 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 49225 /var/tmp/spdk-raid.sock 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 49225 ']' 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:22.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:22.454 06:22:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.454 [2024-07-23 06:22:34.869858] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:22.454 [2024-07-23 06:22:34.870100] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:23.021 EAL: TSC is not safe to use in SMP mode 00:09:23.021 EAL: TSC is not invariant 00:09:23.021 [2024-07-23 06:22:35.418638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.021 [2024-07-23 06:22:35.507356] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:09:23.021 [2024-07-23 06:22:35.509525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.021 [2024-07-23 06:22:35.510294] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.021 [2024-07-23 06:22:35.510307] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.605 06:22:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:23.605 06:22:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:09:23.605 06:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:09:23.605 06:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:23.605 06:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:09:23.605 06:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:09:23.605 06:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:23.605 06:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:23.605 06:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:23.605 06:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:23.605 06:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:23.864 malloc1 00:09:23.864 06:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:24.123 [2024-07-23 06:22:36.382868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:24.123 [2024-07-23 06:22:36.382926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.123 [2024-07-23 06:22:36.382939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2990fbc34780 00:09:24.123 [2024-07-23 06:22:36.382948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.123 [2024-07-23 06:22:36.383867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.123 [2024-07-23 06:22:36.383893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:24.123 pt1 00:09:24.123 06:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:24.123 06:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:24.123 06:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:09:24.123 06:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:09:24.123 06:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:24.123 06:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.123 06:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.123 06:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.123 06:22:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:24.381 malloc2 00:09:24.381 06:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:24.640 [2024-07-23 06:22:36.962875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:24.640 [2024-07-23 06:22:36.962931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.640 [2024-07-23 06:22:36.962943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2990fbc34c80 00:09:24.640 [2024-07-23 06:22:36.962952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.640 [2024-07-23 06:22:36.963602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.640 [2024-07-23 06:22:36.963628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:24.640 pt2 00:09:24.640 06:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:24.640 06:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:24.640 06:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:09:24.899 [2024-07-23 06:22:37.218889] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:24.899 [2024-07-23 06:22:37.219460] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:24.899 [2024-07-23 06:22:37.219523] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2990fbc34f00 00:09:24.899 [2024-07-23 06:22:37.219530] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:24.899 [2024-07-23 06:22:37.219579] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2990fbc97e20 00:09:24.899 [2024-07-23 06:22:37.219656] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2990fbc34f00 00:09:24.899 [2024-07-23 06:22:37.219661] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2990fbc34f00 00:09:24.899 [2024-07-23 06:22:37.219689] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.899 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:24.899 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:24.899 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:24.899 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:24.899 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:24.899 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:24.899 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:24.899 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:24.899 06:22:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:24.899 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:24.899 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:24.899 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.157 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:25.157 "name": "raid_bdev1", 00:09:25.157 "uuid": "f4a31a6c-48bb-11ef-a06c-59ddad71024c", 00:09:25.157 "strip_size_kb": 64, 00:09:25.157 "state": "online", 00:09:25.157 "raid_level": "raid0", 00:09:25.157 "superblock": true, 00:09:25.157 "num_base_bdevs": 2, 00:09:25.157 "num_base_bdevs_discovered": 2, 00:09:25.157 "num_base_bdevs_operational": 2, 00:09:25.157 "base_bdevs_list": [ 00:09:25.157 { 00:09:25.157 "name": "pt1", 00:09:25.157 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.157 "is_configured": true, 00:09:25.157 "data_offset": 2048, 00:09:25.157 "data_size": 63488 00:09:25.157 }, 00:09:25.157 { 00:09:25.157 "name": "pt2", 00:09:25.157 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.157 "is_configured": true, 00:09:25.157 "data_offset": 2048, 00:09:25.157 "data_size": 63488 00:09:25.157 } 00:09:25.157 ] 00:09:25.157 }' 00:09:25.157 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:25.157 06:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.416 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:09:25.416 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:25.416 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:25.416 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:25.416 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:25.416 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:25.416 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:25.416 06:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:25.726 [2024-07-23 06:22:38.050987] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.726 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:25.726 "name": "raid_bdev1", 00:09:25.726 "aliases": [ 00:09:25.726 "f4a31a6c-48bb-11ef-a06c-59ddad71024c" 00:09:25.726 ], 00:09:25.726 "product_name": "Raid Volume", 00:09:25.726 "block_size": 512, 00:09:25.726 "num_blocks": 126976, 00:09:25.726 "uuid": "f4a31a6c-48bb-11ef-a06c-59ddad71024c", 00:09:25.726 "assigned_rate_limits": { 00:09:25.726 "rw_ios_per_sec": 0, 00:09:25.726 "rw_mbytes_per_sec": 0, 00:09:25.726 "r_mbytes_per_sec": 0, 00:09:25.726 "w_mbytes_per_sec": 0 00:09:25.726 }, 00:09:25.726 "claimed": false, 00:09:25.726 "zoned": false, 00:09:25.726 "supported_io_types": { 00:09:25.726 "read": true, 00:09:25.726 "write": true, 00:09:25.726 "unmap": true, 00:09:25.726 "flush": true, 00:09:25.726 "reset": true, 00:09:25.726 "nvme_admin": false, 00:09:25.726 "nvme_io": 
false, 00:09:25.726 "nvme_io_md": false, 00:09:25.726 "write_zeroes": true, 00:09:25.726 "zcopy": false, 00:09:25.726 "get_zone_info": false, 00:09:25.726 "zone_management": false, 00:09:25.726 "zone_append": false, 00:09:25.726 "compare": false, 00:09:25.726 "compare_and_write": false, 00:09:25.726 "abort": false, 00:09:25.726 "seek_hole": false, 00:09:25.726 "seek_data": false, 00:09:25.726 "copy": false, 00:09:25.726 "nvme_iov_md": false 00:09:25.726 }, 00:09:25.726 "memory_domains": [ 00:09:25.726 { 00:09:25.726 "dma_device_id": "system", 00:09:25.726 "dma_device_type": 1 00:09:25.726 }, 00:09:25.726 { 00:09:25.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.726 "dma_device_type": 2 00:09:25.726 }, 00:09:25.726 { 00:09:25.726 "dma_device_id": "system", 00:09:25.726 "dma_device_type": 1 00:09:25.726 }, 00:09:25.726 { 00:09:25.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.726 "dma_device_type": 2 00:09:25.726 } 00:09:25.726 ], 00:09:25.726 "driver_specific": { 00:09:25.726 "raid": { 00:09:25.726 "uuid": "f4a31a6c-48bb-11ef-a06c-59ddad71024c", 00:09:25.726 "strip_size_kb": 64, 00:09:25.726 "state": "online", 00:09:25.726 "raid_level": "raid0", 00:09:25.726 "superblock": true, 00:09:25.726 "num_base_bdevs": 2, 00:09:25.726 "num_base_bdevs_discovered": 2, 00:09:25.726 "num_base_bdevs_operational": 2, 00:09:25.726 "base_bdevs_list": [ 00:09:25.726 { 00:09:25.726 "name": "pt1", 00:09:25.726 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.726 "is_configured": true, 00:09:25.726 "data_offset": 2048, 00:09:25.726 "data_size": 63488 00:09:25.726 }, 00:09:25.726 { 00:09:25.726 "name": "pt2", 00:09:25.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.726 "is_configured": true, 00:09:25.726 "data_offset": 2048, 00:09:25.726 "data_size": 63488 00:09:25.726 } 00:09:25.726 ] 00:09:25.726 } 00:09:25.726 } 00:09:25.726 }' 00:09:25.726 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.726 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:25.726 pt2' 00:09:25.726 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:25.726 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:25.726 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:25.985 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:25.985 "name": "pt1", 00:09:25.985 "aliases": [ 00:09:25.985 "00000000-0000-0000-0000-000000000001" 00:09:25.985 ], 00:09:25.985 "product_name": "passthru", 00:09:25.985 "block_size": 512, 00:09:25.985 "num_blocks": 65536, 00:09:25.985 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.985 "assigned_rate_limits": { 00:09:25.985 "rw_ios_per_sec": 0, 00:09:25.985 "rw_mbytes_per_sec": 0, 00:09:25.985 "r_mbytes_per_sec": 0, 00:09:25.985 "w_mbytes_per_sec": 0 00:09:25.985 }, 00:09:25.985 "claimed": true, 00:09:25.985 "claim_type": "exclusive_write", 00:09:25.986 "zoned": false, 00:09:25.986 "supported_io_types": { 00:09:25.986 "read": true, 00:09:25.986 "write": true, 00:09:25.986 "unmap": true, 00:09:25.986 "flush": true, 00:09:25.986 "reset": true, 00:09:25.986 "nvme_admin": false, 00:09:25.986 "nvme_io": false, 00:09:25.986 "nvme_io_md": false, 00:09:25.986 "write_zeroes": true, 
00:09:25.986 "zcopy": true, 00:09:25.986 "get_zone_info": false, 00:09:25.986 "zone_management": false, 00:09:25.986 "zone_append": false, 00:09:25.986 "compare": false, 00:09:25.986 "compare_and_write": false, 00:09:25.986 "abort": true, 00:09:25.986 "seek_hole": false, 00:09:25.986 "seek_data": false, 00:09:25.986 "copy": true, 00:09:25.986 "nvme_iov_md": false 00:09:25.986 }, 00:09:25.986 "memory_domains": [ 00:09:25.986 { 00:09:25.986 "dma_device_id": "system", 00:09:25.986 "dma_device_type": 1 00:09:25.986 }, 00:09:25.986 { 00:09:25.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.986 "dma_device_type": 2 00:09:25.986 } 00:09:25.986 ], 00:09:25.986 "driver_specific": { 00:09:25.986 "passthru": { 00:09:25.986 "name": "pt1", 00:09:25.986 "base_bdev_name": "malloc1" 00:09:25.986 } 00:09:25.986 } 00:09:25.986 }' 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:25.986 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:26.245 "name": "pt2", 00:09:26.245 "aliases": [ 00:09:26.245 "00000000-0000-0000-0000-000000000002" 00:09:26.245 ], 00:09:26.245 "product_name": "passthru", 00:09:26.245 "block_size": 512, 00:09:26.245 "num_blocks": 65536, 00:09:26.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.245 "assigned_rate_limits": { 00:09:26.245 "rw_ios_per_sec": 0, 00:09:26.245 "rw_mbytes_per_sec": 0, 00:09:26.245 "r_mbytes_per_sec": 0, 00:09:26.245 "w_mbytes_per_sec": 0 00:09:26.245 }, 00:09:26.245 "claimed": true, 00:09:26.245 "claim_type": "exclusive_write", 00:09:26.245 "zoned": false, 00:09:26.245 "supported_io_types": { 00:09:26.245 "read": true, 00:09:26.245 "write": true, 00:09:26.245 "unmap": true, 00:09:26.245 "flush": true, 00:09:26.245 "reset": true, 00:09:26.245 "nvme_admin": false, 00:09:26.245 "nvme_io": false, 00:09:26.245 "nvme_io_md": false, 00:09:26.245 "write_zeroes": true, 00:09:26.245 "zcopy": true, 00:09:26.245 "get_zone_info": false, 00:09:26.245 "zone_management": false, 00:09:26.245 "zone_append": false, 00:09:26.245 
"compare": false, 00:09:26.245 "compare_and_write": false, 00:09:26.245 "abort": true, 00:09:26.245 "seek_hole": false, 00:09:26.245 "seek_data": false, 00:09:26.245 "copy": true, 00:09:26.245 "nvme_iov_md": false 00:09:26.245 }, 00:09:26.245 "memory_domains": [ 00:09:26.245 { 00:09:26.245 "dma_device_id": "system", 00:09:26.245 "dma_device_type": 1 00:09:26.245 }, 00:09:26.245 { 00:09:26.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.245 "dma_device_type": 2 00:09:26.245 } 00:09:26.245 ], 00:09:26.245 "driver_specific": { 00:09:26.245 "passthru": { 00:09:26.245 "name": "pt2", 00:09:26.245 "base_bdev_name": "malloc2" 00:09:26.245 } 00:09:26.245 } 00:09:26.245 }' 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:26.245 06:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:09:26.504 [2024-07-23 06:22:39.011017] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.762 06:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f4a31a6c-48bb-11ef-a06c-59ddad71024c 00:09:26.762 06:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f4a31a6c-48bb-11ef-a06c-59ddad71024c ']' 00:09:26.762 06:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:26.762 [2024-07-23 06:22:39.270996] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:26.762 [2024-07-23 06:22:39.271023] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.762 [2024-07-23 06:22:39.271063] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.762 [2024-07-23 06:22:39.271075] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.762 [2024-07-23 06:22:39.271079] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2990fbc34f00 name raid_bdev1, state offline 00:09:27.021 06:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:09:27.021 06:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:09:27.279 06:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:09:27.279 06:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:09:27.279 06:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:27.279 06:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:27.537 06:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:27.538 06:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:27.538 06:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:27.796 06:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:28.055 06:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:09:28.055 06:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:09:28.055 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:09:28.055 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:09:28.055 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.055 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:28.055 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.055 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:28.055 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.055 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:28.055 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.055 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:28.055 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:09:28.314 [2024-07-23 06:22:40.575063] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:28.314 [2024-07-23 06:22:40.575627] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:28.314 [2024-07-23 06:22:40.575666] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:09:28.314 [2024-07-23 06:22:40.575717] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:28.314 [2024-07-23 06:22:40.575729] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:28.314 [2024-07-23 06:22:40.575733] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2990fbc34c80 name raid_bdev1, state configuring 00:09:28.314 request: 00:09:28.314 { 00:09:28.314 "name": "raid_bdev1", 00:09:28.314 "raid_level": "raid0", 00:09:28.314 "base_bdevs": [ 00:09:28.314 "malloc1", 00:09:28.314 "malloc2" 00:09:28.314 ], 00:09:28.314 "strip_size_kb": 64, 00:09:28.314 "superblock": false, 00:09:28.314 "method": "bdev_raid_create", 00:09:28.314 "req_id": 1 00:09:28.314 } 00:09:28.314 Got JSON-RPC error response 00:09:28.314 response: 00:09:28.314 { 00:09:28.314 "code": -17, 00:09:28.314 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:28.314 } 00:09:28.314 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:09:28.314 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:28.314 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:28.314 06:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:28.314 06:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:09:28.314 06:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.573 06:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:09:28.573 06:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:09:28.573 06:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:28.573 [2024-07-23 06:22:41.047057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:28.573 [2024-07-23 06:22:41.047111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.573 [2024-07-23 06:22:41.047141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2990fbc34780 00:09:28.573 [2024-07-23 06:22:41.047149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.573 [2024-07-23 06:22:41.047822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.573 [2024-07-23 06:22:41.047846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:28.573 [2024-07-23 06:22:41.047871] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:28.573 [2024-07-23 06:22:41.047883] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:28.573 pt1 00:09:28.573 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:28.573 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:28.573 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:28.573 06:22:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:28.573 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:28.573 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:28.573 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:28.573 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:28.573 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:28.573 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:28.573 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.573 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.833 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:28.833 "name": "raid_bdev1", 00:09:28.833 "uuid": "f4a31a6c-48bb-11ef-a06c-59ddad71024c", 00:09:28.833 "strip_size_kb": 64, 00:09:28.833 "state": "configuring", 00:09:28.833 "raid_level": "raid0", 00:09:28.833 "superblock": true, 00:09:28.833 "num_base_bdevs": 2, 00:09:28.833 "num_base_bdevs_discovered": 1, 00:09:28.833 "num_base_bdevs_operational": 2, 00:09:28.833 "base_bdevs_list": [ 00:09:28.833 { 00:09:28.833 "name": "pt1", 00:09:28.833 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:28.833 "is_configured": true, 00:09:28.833 "data_offset": 2048, 00:09:28.833 "data_size": 63488 00:09:28.833 }, 00:09:28.833 { 00:09:28.833 "name": null, 00:09:28.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.833 "is_configured": false, 00:09:28.833 "data_offset": 2048, 00:09:28.833 "data_size": 63488 00:09:28.833 } 00:09:28.833 ] 00:09:28.833 }' 00:09:28.833 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:28.833 06:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.401 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:09:29.401 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:09:29.401 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:29.401 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:29.401 [2024-07-23 06:22:41.899081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:29.401 [2024-07-23 06:22:41.899148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.401 [2024-07-23 06:22:41.899176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2990fbc34f00 00:09:29.401 [2024-07-23 06:22:41.899183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.401 [2024-07-23 06:22:41.899313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.401 [2024-07-23 06:22:41.899324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:29.401 [2024-07-23 06:22:41.899347] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:29.401 
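The re-assembly being exercised at this point can be reproduced by hand with the same RPCs the script issues. A minimal sketch, assuming the SPDK app is still listening on /var/tmp/spdk-raid.sock and that malloc1/malloc2 carry the raid0 superblock written earlier in the test:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Re-create the passthru bdevs over the malloc bdevs; as the trace shows,
# examine finds the raid superblock on each one and re-assembles raid_bdev1
# without any explicit bdev_raid_create call.
$rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# Poll the array: "state" is "configuring" while only one of the two base
# bdevs has been discovered, and flips to "online" once pt2 is claimed.
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'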
[2024-07-23 06:22:41.899355] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:29.401 [2024-07-23 06:22:41.899381] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2990fbc35180 00:09:29.401 [2024-07-23 06:22:41.899385] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:29.401 [2024-07-23 06:22:41.899405] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2990fbc97e20 00:09:29.401 [2024-07-23 06:22:41.899458] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2990fbc35180 00:09:29.401 [2024-07-23 06:22:41.899462] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2990fbc35180 00:09:29.401 [2024-07-23 06:22:41.899483] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.401 pt2 00:09:29.401 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:09:29.401 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:29.401 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:29.660 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:29.660 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:29.660 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:29.660 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:29.660 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:29.660 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:29.660 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:29.660 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:29.660 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:29.660 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:29.660 06:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.918 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:29.919 "name": "raid_bdev1", 00:09:29.919 "uuid": "f4a31a6c-48bb-11ef-a06c-59ddad71024c", 00:09:29.919 "strip_size_kb": 64, 00:09:29.919 "state": "online", 00:09:29.919 "raid_level": "raid0", 00:09:29.919 "superblock": true, 00:09:29.919 "num_base_bdevs": 2, 00:09:29.919 "num_base_bdevs_discovered": 2, 00:09:29.919 "num_base_bdevs_operational": 2, 00:09:29.919 "base_bdevs_list": [ 00:09:29.919 { 00:09:29.919 "name": "pt1", 00:09:29.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:29.919 "is_configured": true, 00:09:29.919 "data_offset": 2048, 00:09:29.919 "data_size": 63488 00:09:29.919 }, 00:09:29.919 { 00:09:29.919 "name": "pt2", 00:09:29.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.919 "is_configured": true, 00:09:29.919 "data_offset": 2048, 00:09:29.919 "data_size": 63488 00:09:29.919 } 00:09:29.919 ] 00:09:29.919 }' 00:09:29.919 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:09:29.919 06:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.178 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:09:30.178 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:30.178 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:30.178 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:30.178 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:30.178 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:30.178 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:30.178 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:30.438 [2024-07-23 06:22:42.819256] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.438 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:30.438 "name": "raid_bdev1", 00:09:30.438 "aliases": [ 00:09:30.438 "f4a31a6c-48bb-11ef-a06c-59ddad71024c" 00:09:30.438 ], 00:09:30.438 "product_name": "Raid Volume", 00:09:30.438 "block_size": 512, 00:09:30.438 "num_blocks": 126976, 00:09:30.438 "uuid": "f4a31a6c-48bb-11ef-a06c-59ddad71024c", 00:09:30.438 "assigned_rate_limits": { 00:09:30.438 "rw_ios_per_sec": 0, 00:09:30.438 "rw_mbytes_per_sec": 0, 00:09:30.438 "r_mbytes_per_sec": 0, 00:09:30.438 "w_mbytes_per_sec": 0 00:09:30.438 }, 00:09:30.438 "claimed": false, 00:09:30.438 "zoned": false, 00:09:30.438 "supported_io_types": { 00:09:30.438 "read": true, 00:09:30.438 "write": true, 00:09:30.438 "unmap": true, 00:09:30.438 "flush": true, 00:09:30.438 "reset": true, 00:09:30.438 "nvme_admin": false, 00:09:30.438 "nvme_io": false, 00:09:30.438 "nvme_io_md": false, 00:09:30.438 "write_zeroes": true, 00:09:30.438 "zcopy": false, 00:09:30.438 "get_zone_info": false, 00:09:30.438 "zone_management": false, 00:09:30.438 "zone_append": false, 00:09:30.438 "compare": false, 00:09:30.438 "compare_and_write": false, 00:09:30.438 "abort": false, 00:09:30.438 "seek_hole": false, 00:09:30.438 "seek_data": false, 00:09:30.438 "copy": false, 00:09:30.438 "nvme_iov_md": false 00:09:30.438 }, 00:09:30.438 "memory_domains": [ 00:09:30.438 { 00:09:30.438 "dma_device_id": "system", 00:09:30.438 "dma_device_type": 1 00:09:30.438 }, 00:09:30.438 { 00:09:30.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.439 "dma_device_type": 2 00:09:30.439 }, 00:09:30.439 { 00:09:30.439 "dma_device_id": "system", 00:09:30.439 "dma_device_type": 1 00:09:30.439 }, 00:09:30.439 { 00:09:30.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.439 "dma_device_type": 2 00:09:30.439 } 00:09:30.439 ], 00:09:30.439 "driver_specific": { 00:09:30.439 "raid": { 00:09:30.439 "uuid": "f4a31a6c-48bb-11ef-a06c-59ddad71024c", 00:09:30.439 "strip_size_kb": 64, 00:09:30.439 "state": "online", 00:09:30.439 "raid_level": "raid0", 00:09:30.439 "superblock": true, 00:09:30.439 "num_base_bdevs": 2, 00:09:30.439 "num_base_bdevs_discovered": 2, 00:09:30.439 "num_base_bdevs_operational": 2, 00:09:30.439 "base_bdevs_list": [ 00:09:30.439 { 00:09:30.439 "name": "pt1", 00:09:30.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.439 "is_configured": 
true, 00:09:30.439 "data_offset": 2048, 00:09:30.439 "data_size": 63488 00:09:30.439 }, 00:09:30.439 { 00:09:30.439 "name": "pt2", 00:09:30.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.439 "is_configured": true, 00:09:30.439 "data_offset": 2048, 00:09:30.439 "data_size": 63488 00:09:30.439 } 00:09:30.439 ] 00:09:30.439 } 00:09:30.439 } 00:09:30.439 }' 00:09:30.439 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.439 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:30.439 pt2' 00:09:30.439 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:30.439 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:30.439 06:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:30.704 "name": "pt1", 00:09:30.704 "aliases": [ 00:09:30.704 "00000000-0000-0000-0000-000000000001" 00:09:30.704 ], 00:09:30.704 "product_name": "passthru", 00:09:30.704 "block_size": 512, 00:09:30.704 "num_blocks": 65536, 00:09:30.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.704 "assigned_rate_limits": { 00:09:30.704 "rw_ios_per_sec": 0, 00:09:30.704 "rw_mbytes_per_sec": 0, 00:09:30.704 "r_mbytes_per_sec": 0, 00:09:30.704 "w_mbytes_per_sec": 0 00:09:30.704 }, 00:09:30.704 "claimed": true, 00:09:30.704 "claim_type": "exclusive_write", 00:09:30.704 "zoned": false, 00:09:30.704 "supported_io_types": { 00:09:30.704 "read": true, 00:09:30.704 "write": true, 00:09:30.704 "unmap": true, 00:09:30.704 "flush": true, 00:09:30.704 "reset": true, 00:09:30.704 "nvme_admin": false, 00:09:30.704 "nvme_io": false, 00:09:30.704 "nvme_io_md": false, 00:09:30.704 "write_zeroes": true, 00:09:30.704 "zcopy": true, 00:09:30.704 "get_zone_info": false, 00:09:30.704 "zone_management": false, 00:09:30.704 "zone_append": false, 00:09:30.704 "compare": false, 00:09:30.704 "compare_and_write": false, 00:09:30.704 "abort": true, 00:09:30.704 "seek_hole": false, 00:09:30.704 "seek_data": false, 00:09:30.704 "copy": true, 00:09:30.704 "nvme_iov_md": false 00:09:30.704 }, 00:09:30.704 "memory_domains": [ 00:09:30.704 { 00:09:30.704 "dma_device_id": "system", 00:09:30.704 "dma_device_type": 1 00:09:30.704 }, 00:09:30.704 { 00:09:30.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.704 "dma_device_type": 2 00:09:30.704 } 00:09:30.704 ], 00:09:30.704 "driver_specific": { 00:09:30.704 "passthru": { 00:09:30.704 "name": "pt1", 00:09:30.704 "base_bdev_name": "malloc1" 00:09:30.704 } 00:09:30.704 } 00:09:30.704 }' 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:30.704 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:30.962 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:30.962 "name": "pt2", 00:09:30.962 "aliases": [ 00:09:30.962 "00000000-0000-0000-0000-000000000002" 00:09:30.962 ], 00:09:30.962 "product_name": "passthru", 00:09:30.962 "block_size": 512, 00:09:30.962 "num_blocks": 65536, 00:09:30.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.962 "assigned_rate_limits": { 00:09:30.962 "rw_ios_per_sec": 0, 00:09:30.962 "rw_mbytes_per_sec": 0, 00:09:30.962 "r_mbytes_per_sec": 0, 00:09:30.962 "w_mbytes_per_sec": 0 00:09:30.962 }, 00:09:30.962 "claimed": true, 00:09:30.962 "claim_type": "exclusive_write", 00:09:30.962 "zoned": false, 00:09:30.962 "supported_io_types": { 00:09:30.962 "read": true, 00:09:30.962 "write": true, 00:09:30.962 "unmap": true, 00:09:30.962 "flush": true, 00:09:30.962 "reset": true, 00:09:30.962 "nvme_admin": false, 00:09:30.962 "nvme_io": false, 00:09:30.962 "nvme_io_md": false, 00:09:30.962 "write_zeroes": true, 00:09:30.962 "zcopy": true, 00:09:30.962 "get_zone_info": false, 00:09:30.962 "zone_management": false, 00:09:30.962 "zone_append": false, 00:09:30.962 "compare": false, 00:09:30.962 "compare_and_write": false, 00:09:30.962 "abort": true, 00:09:30.962 "seek_hole": false, 00:09:30.962 "seek_data": false, 00:09:30.962 "copy": true, 00:09:30.962 "nvme_iov_md": false 00:09:30.962 }, 00:09:30.962 "memory_domains": [ 00:09:30.962 { 00:09:30.962 "dma_device_id": "system", 00:09:30.962 "dma_device_type": 1 00:09:30.962 }, 00:09:30.962 { 00:09:30.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.962 "dma_device_type": 2 00:09:30.962 } 00:09:30.962 ], 00:09:30.962 "driver_specific": { 00:09:30.962 "passthru": { 00:09:30.962 "name": "pt2", 00:09:30.962 "base_bdev_name": "malloc2" 00:09:30.962 } 00:09:30.962 } 00:09:30.962 }' 00:09:30.962 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:31.221 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:31.221 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:31.221 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:31.221 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:31.221 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:31.222 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:31.222 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:31.222 06:22:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:31.222 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:31.222 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:31.222 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:31.222 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:31.222 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:09:31.481 [2024-07-23 06:22:43.799301] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f4a31a6c-48bb-11ef-a06c-59ddad71024c '!=' f4a31a6c-48bb-11ef-a06c-59ddad71024c ']' 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 49225 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 49225 ']' 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 49225 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 49225 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:31.481 killing process with pid 49225 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49225' 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 49225 00:09:31.481 [2024-07-23 06:22:43.831287] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.481 [2024-07-23 06:22:43.831323] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.481 [2024-07-23 06:22:43.831336] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.481 [2024-07-23 06:22:43.831340] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2990fbc35180 name raid_bdev1, state offline 00:09:31.481 06:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 49225 00:09:31.481 [2024-07-23 06:22:43.843011] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.740 06:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:09:31.740 00:09:31.740 real 0m9.162s 00:09:31.740 user 0m15.954s 00:09:31.740 sys 0m1.619s 00:09:31.740 06:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.740 06:22:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.740 ************************************ 00:09:31.740 END TEST raid_superblock_test 00:09:31.740 ************************************ 00:09:31.740 06:22:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:31.740 06:22:44 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:09:31.740 06:22:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:31.740 06:22:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.740 06:22:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.740 ************************************ 00:09:31.740 START TEST raid_read_error_test 00:09:31.740 ************************************ 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.gN8FOzZCEp 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49490 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49490 
/var/tmp/spdk-raid.sock 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 49490 ']' 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.740 06:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.740 [2024-07-23 06:22:44.088266] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:31.740 [2024-07-23 06:22:44.088540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:32.307 EAL: TSC is not safe to use in SMP mode 00:09:32.307 EAL: TSC is not invariant 00:09:32.307 [2024-07-23 06:22:44.662032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.307 [2024-07-23 06:22:44.746428] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:32.307 [2024-07-23 06:22:44.748771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.307 [2024-07-23 06:22:44.749622] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.307 [2024-07-23 06:22:44.749637] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.874 06:22:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.874 06:22:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:09:32.874 06:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:32.874 06:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:33.133 BaseBdev1_malloc 00:09:33.133 06:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:33.133 true 00:09:33.133 06:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:33.391 [2024-07-23 06:22:45.853893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:33.391 [2024-07-23 06:22:45.853957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.391 [2024-07-23 06:22:45.853984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21796e234780 00:09:33.391 [2024-07-23 06:22:45.853993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
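Each base device in these error tests is a three-layer stack built with the RPCs visible in the trace: the error bdev created on top of a malloc bdev registers under the EE_ prefix, and the passthru bdev placed on top of that is what the raid module claims. A condensed sketch of the setup for both base bdevs and the raid0 volume, using the same names and socket as this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

for i in 1 2; do
  # 32 MiB malloc bdev with a 512-byte block size
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
  # error-injection wrapper, registered as EE_BaseBdev${i}_malloc
  $rpc -s $sock bdev_error_create BaseBdev${i}_malloc
  # passthru bdev that becomes the raid base bdev
  $rpc -s $sock bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
done

# raid0 with 64 KiB strips and an on-disk superblock (-s)
$rpc -s $sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s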
00:09:33.391 [2024-07-23 06:22:45.854677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.391 [2024-07-23 06:22:45.854709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:33.391 BaseBdev1 00:09:33.391 06:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:33.391 06:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:33.650 BaseBdev2_malloc 00:09:33.650 06:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:33.910 true 00:09:33.910 06:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:34.169 [2024-07-23 06:22:46.666042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:34.169 [2024-07-23 06:22:46.666100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.169 [2024-07-23 06:22:46.666125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21796e234c80 00:09:34.169 [2024-07-23 06:22:46.666134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.169 [2024-07-23 06:22:46.666818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.169 [2024-07-23 06:22:46.666848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:34.169 BaseBdev2 00:09:34.427 06:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:09:34.427 [2024-07-23 06:22:46.910087] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.427 [2024-07-23 06:22:46.910711] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.427 [2024-07-23 06:22:46.910803] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x21796e234f00 00:09:34.427 [2024-07-23 06:22:46.910810] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:34.427 [2024-07-23 06:22:46.910844] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21796e2a0e20 00:09:34.427 [2024-07-23 06:22:46.910931] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21796e234f00 00:09:34.427 [2024-07-23 06:22:46.910941] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x21796e234f00 00:09:34.427 [2024-07-23 06:22:46.910973] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.427 06:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:34.427 06:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:34.427 06:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:34.427 06:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:34.427 06:22:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:34.427 06:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:34.427 06:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:34.427 06:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:34.427 06:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:34.427 06:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:34.428 06:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.428 06:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:35.049 06:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:35.049 "name": "raid_bdev1", 00:09:35.049 "uuid": "fa69dce5-48bb-11ef-a06c-59ddad71024c", 00:09:35.049 "strip_size_kb": 64, 00:09:35.049 "state": "online", 00:09:35.049 "raid_level": "raid0", 00:09:35.049 "superblock": true, 00:09:35.049 "num_base_bdevs": 2, 00:09:35.049 "num_base_bdevs_discovered": 2, 00:09:35.049 "num_base_bdevs_operational": 2, 00:09:35.049 "base_bdevs_list": [ 00:09:35.049 { 00:09:35.049 "name": "BaseBdev1", 00:09:35.049 "uuid": "1038e278-1406-e253-8b4b-bfd269ea2cba", 00:09:35.049 "is_configured": true, 00:09:35.049 "data_offset": 2048, 00:09:35.049 "data_size": 63488 00:09:35.049 }, 00:09:35.049 { 00:09:35.049 "name": "BaseBdev2", 00:09:35.049 "uuid": "d1c338ed-4a3e-475c-a932-f65f28bdc391", 00:09:35.050 "is_configured": true, 00:09:35.050 "data_offset": 2048, 00:09:35.050 "data_size": 63488 00:09:35.050 } 00:09:35.050 ] 00:09:35.050 }' 00:09:35.050 06:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:35.050 06:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.050 06:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:09:35.050 06:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:35.310 [2024-07-23 06:22:47.634389] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21796e2a0ec0 00:09:36.247 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:36.506 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:09:36.506 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:36.506 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:36.506 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:36.506 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:36.506 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:36.506 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:36.506 06:22:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:36.506 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:36.506 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:36.506 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:36.506 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:36.506 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:36.506 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.506 06:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.765 06:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:36.765 "name": "raid_bdev1", 00:09:36.765 "uuid": "fa69dce5-48bb-11ef-a06c-59ddad71024c", 00:09:36.765 "strip_size_kb": 64, 00:09:36.765 "state": "online", 00:09:36.765 "raid_level": "raid0", 00:09:36.765 "superblock": true, 00:09:36.765 "num_base_bdevs": 2, 00:09:36.765 "num_base_bdevs_discovered": 2, 00:09:36.765 "num_base_bdevs_operational": 2, 00:09:36.765 "base_bdevs_list": [ 00:09:36.765 { 00:09:36.765 "name": "BaseBdev1", 00:09:36.765 "uuid": "1038e278-1406-e253-8b4b-bfd269ea2cba", 00:09:36.765 "is_configured": true, 00:09:36.765 "data_offset": 2048, 00:09:36.765 "data_size": 63488 00:09:36.765 }, 00:09:36.765 { 00:09:36.765 "name": "BaseBdev2", 00:09:36.765 "uuid": "d1c338ed-4a3e-475c-a932-f65f28bdc391", 00:09:36.765 "is_configured": true, 00:09:36.765 "data_offset": 2048, 00:09:36.765 "data_size": 63488 00:09:36.765 } 00:09:36.765 ] 00:09:36.765 }' 00:09:36.765 06:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:36.765 06:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.023 06:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:37.282 [2024-07-23 06:22:49.735913] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:37.282 [2024-07-23 06:22:49.735941] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.282 [2024-07-23 06:22:49.736279] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.282 [2024-07-23 06:22:49.736305] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.282 [2024-07-23 06:22:49.736315] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.282 [2024-07-23 06:22:49.736321] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21796e234f00 name raid_bdev1, state offline 00:09:37.282 0 00:09:37.282 06:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49490 00:09:37.282 06:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 49490 ']' 00:09:37.282 06:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 49490 00:09:37.282 06:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:09:37.282 06:22:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:37.282 06:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 49490 00:09:37.282 06:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:09:37.282 06:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:09:37.282 killing process with pid 49490 00:09:37.282 06:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:09:37.282 06:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49490' 00:09:37.282 06:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 49490 00:09:37.282 [2024-07-23 06:22:49.762003] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:37.283 06:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 49490 00:09:37.283 [2024-07-23 06:22:49.774477] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.542 06:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.gN8FOzZCEp 00:09:37.542 06:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:09:37.542 06:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:09:37.542 06:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:09:37.542 06:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:09:37.542 06:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:37.542 06:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:37.542 06:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:09:37.542 00:09:37.542 real 0m5.891s 00:09:37.542 user 0m9.003s 00:09:37.542 sys 0m1.055s 00:09:37.542 06:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:37.542 06:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.542 ************************************ 00:09:37.542 END TEST raid_read_error_test 00:09:37.542 ************************************ 00:09:37.542 06:22:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:37.542 06:22:49 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:37.542 06:22:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:37.542 06:22:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.542 06:22:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.542 ************************************ 00:09:37.542 START TEST raid_write_error_test 00:09:37.542 ************************************ 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:09:37.542 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:09:37.543 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:09:37.543 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:37.543 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.vrEEYrEIIn 00:09:37.543 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49618 00:09:37.543 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49618 /var/tmp/spdk-raid.sock 00:09:37.543 06:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 49618 ']' 00:09:37.543 06:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:37.543 06:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:37.543 06:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:37.543 06:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:37.543 06:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.543 06:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.543 [2024-07-23 06:22:50.023320] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
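The write-error pass follows the same pattern as the read-error test above: bdevperf starts with -z and waits on the RPC socket, the bdev stack and raid_bdev1 are created, an error is injected into one base bdev, I/O is driven with perform_tests, and the failure rate is read back out of the bdevperf log. A sketch of that last stage, assuming the same layout as the read test (the trace only shows the read variant; the write test presumably just changes the injected I/O type):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
bdevperf_log=/raidtest/tmp.vrEEYrEIIn   # mktemp result recorded above

# fail every write on the first base bdev, then run the randrw workload
$rpc -s $sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests

# raid0 has no redundancy, so a non-zero failures-per-second value is expected
grep raid_bdev1 "$bdevperf_log" | grep -v Job | awk '{print $6}'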
00:09:37.543 [2024-07-23 06:22:50.023585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:38.111 EAL: TSC is not safe to use in SMP mode 00:09:38.111 EAL: TSC is not invariant 00:09:38.111 [2024-07-23 06:22:50.584838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.370 [2024-07-23 06:22:50.673508] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:38.370 [2024-07-23 06:22:50.675698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.370 [2024-07-23 06:22:50.676496] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.370 [2024-07-23 06:22:50.676509] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.628 06:22:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:38.628 06:22:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:09:38.628 06:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:38.628 06:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:38.930 BaseBdev1_malloc 00:09:38.930 06:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:39.188 true 00:09:39.188 06:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:39.447 [2024-07-23 06:22:51.861277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:39.447 [2024-07-23 06:22:51.861343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.447 [2024-07-23 06:22:51.861371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11053c034780 00:09:39.447 [2024-07-23 06:22:51.861380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.447 [2024-07-23 06:22:51.862063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.447 [2024-07-23 06:22:51.862085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:39.447 BaseBdev1 00:09:39.447 06:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:39.447 06:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:39.705 BaseBdev2_malloc 00:09:39.705 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:39.964 true 00:09:39.964 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:40.222 [2024-07-23 06:22:52.657357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:40.222 [2024-07-23 06:22:52.657405] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.222 [2024-07-23 06:22:52.657432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11053c034c80 00:09:40.222 [2024-07-23 06:22:52.657441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.222 [2024-07-23 06:22:52.658134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.223 [2024-07-23 06:22:52.658161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:40.223 BaseBdev2 00:09:40.223 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:09:40.482 [2024-07-23 06:22:52.901403] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.482 [2024-07-23 06:22:52.902094] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.482 [2024-07-23 06:22:52.902157] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x11053c034f00 00:09:40.482 [2024-07-23 06:22:52.902163] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:40.482 [2024-07-23 06:22:52.902196] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11053c0a0e20 00:09:40.482 [2024-07-23 06:22:52.902295] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11053c034f00 00:09:40.482 [2024-07-23 06:22:52.902300] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11053c034f00 00:09:40.482 [2024-07-23 06:22:52.902327] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.482 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:40.482 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:40.482 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:40.482 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:40.482 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:40.483 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:40.483 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:40.483 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:40.483 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:40.483 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:40.483 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.483 06:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.741 06:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:40.741 "name": "raid_bdev1", 00:09:40.741 "uuid": "fdfc10ac-48bb-11ef-a06c-59ddad71024c", 00:09:40.741 "strip_size_kb": 64, 00:09:40.741 "state": "online", 00:09:40.741 
"raid_level": "raid0", 00:09:40.741 "superblock": true, 00:09:40.741 "num_base_bdevs": 2, 00:09:40.741 "num_base_bdevs_discovered": 2, 00:09:40.741 "num_base_bdevs_operational": 2, 00:09:40.741 "base_bdevs_list": [ 00:09:40.741 { 00:09:40.741 "name": "BaseBdev1", 00:09:40.741 "uuid": "a161f6fa-b9b0-395b-ba84-7182673a9ca3", 00:09:40.741 "is_configured": true, 00:09:40.741 "data_offset": 2048, 00:09:40.741 "data_size": 63488 00:09:40.741 }, 00:09:40.741 { 00:09:40.741 "name": "BaseBdev2", 00:09:40.741 "uuid": "07eb7dd9-21ef-7659-9747-f22baa07ff44", 00:09:40.741 "is_configured": true, 00:09:40.741 "data_offset": 2048, 00:09:40.741 "data_size": 63488 00:09:40.741 } 00:09:40.741 ] 00:09:40.741 }' 00:09:40.741 06:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:40.741 06:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.002 06:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:09:41.002 06:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:41.262 [2024-07-23 06:22:53.601636] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11053c0a0ec0 00:09:42.197 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.455 06:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.714 06:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:42.714 "name": "raid_bdev1", 00:09:42.714 "uuid": "fdfc10ac-48bb-11ef-a06c-59ddad71024c", 00:09:42.714 "strip_size_kb": 64, 00:09:42.714 "state": "online", 00:09:42.714 
"raid_level": "raid0", 00:09:42.714 "superblock": true, 00:09:42.714 "num_base_bdevs": 2, 00:09:42.714 "num_base_bdevs_discovered": 2, 00:09:42.714 "num_base_bdevs_operational": 2, 00:09:42.714 "base_bdevs_list": [ 00:09:42.714 { 00:09:42.714 "name": "BaseBdev1", 00:09:42.714 "uuid": "a161f6fa-b9b0-395b-ba84-7182673a9ca3", 00:09:42.714 "is_configured": true, 00:09:42.714 "data_offset": 2048, 00:09:42.714 "data_size": 63488 00:09:42.714 }, 00:09:42.714 { 00:09:42.714 "name": "BaseBdev2", 00:09:42.714 "uuid": "07eb7dd9-21ef-7659-9747-f22baa07ff44", 00:09:42.714 "is_configured": true, 00:09:42.714 "data_offset": 2048, 00:09:42.714 "data_size": 63488 00:09:42.714 } 00:09:42.714 ] 00:09:42.714 }' 00:09:42.714 06:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:42.714 06:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.982 06:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:43.240 [2024-07-23 06:22:55.667646] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.240 [2024-07-23 06:22:55.667680] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.240 [2024-07-23 06:22:55.668050] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.240 [2024-07-23 06:22:55.668068] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.240 [2024-07-23 06:22:55.668075] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.240 [2024-07-23 06:22:55.668079] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11053c034f00 name raid_bdev1, state offline 00:09:43.240 0 00:09:43.240 06:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49618 00:09:43.240 06:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 49618 ']' 00:09:43.240 06:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 49618 00:09:43.240 06:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:09:43.240 06:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:43.240 06:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 49618 00:09:43.241 06:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:09:43.241 06:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:09:43.241 06:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:09:43.241 killing process with pid 49618 00:09:43.241 06:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49618' 00:09:43.241 06:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 49618 00:09:43.241 [2024-07-23 06:22:55.694811] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:43.241 06:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 49618 00:09:43.241 [2024-07-23 06:22:55.706654] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.499 06:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.vrEEYrEIIn 00:09:43.499 06:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:09:43.499 06:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:09:43.499 06:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:09:43.499 06:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:09:43.499 06:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:43.499 06:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:43.499 06:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:09:43.499 00:09:43.499 real 0m5.891s 00:09:43.499 user 0m9.057s 00:09:43.499 sys 0m0.988s 00:09:43.499 06:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.499 06:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.500 ************************************ 00:09:43.500 END TEST raid_write_error_test 00:09:43.500 ************************************ 00:09:43.500 06:22:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:43.500 06:22:55 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:09:43.500 06:22:55 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:43.500 06:22:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:43.500 06:22:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.500 06:22:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.500 ************************************ 00:09:43.500 START TEST raid_state_function_test 00:09:43.500 ************************************ 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:43.500 06:22:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=49744 00:09:43.500 Process raid pid: 49744 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49744' 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 49744 /var/tmp/spdk-raid.sock 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 49744 ']' 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:43.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:43.500 06:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.500 [2024-07-23 06:22:55.956151] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:43.500 [2024-07-23 06:22:55.956422] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:44.067 EAL: TSC is not safe to use in SMP mode 00:09:44.067 EAL: TSC is not invariant 00:09:44.067 [2024-07-23 06:22:56.519144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.326 [2024-07-23 06:22:56.619891] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:09:44.326 [2024-07-23 06:22:56.622401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.326 [2024-07-23 06:22:56.623301] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.326 [2024-07-23 06:22:56.623318] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.584 06:22:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:44.584 06:22:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:09:44.584 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:44.842 [2024-07-23 06:22:57.232629] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.842 [2024-07-23 06:22:57.232697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.842 [2024-07-23 06:22:57.232703] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.842 [2024-07-23 06:22:57.232711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.842 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:44.842 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:44.842 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:44.843 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:44.843 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:44.843 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:44.843 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:44.843 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:44.843 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:44.843 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:44.843 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.843 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.101 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:45.101 "name": "Existed_Raid", 00:09:45.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.101 "strip_size_kb": 64, 00:09:45.101 "state": "configuring", 00:09:45.101 "raid_level": "concat", 00:09:45.101 "superblock": false, 00:09:45.101 "num_base_bdevs": 2, 00:09:45.101 "num_base_bdevs_discovered": 0, 00:09:45.101 "num_base_bdevs_operational": 2, 00:09:45.101 "base_bdevs_list": [ 00:09:45.101 { 00:09:45.101 "name": "BaseBdev1", 00:09:45.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.101 "is_configured": false, 00:09:45.101 "data_offset": 0, 00:09:45.101 "data_size": 0 00:09:45.101 }, 00:09:45.101 { 00:09:45.101 "name": "BaseBdev2", 
00:09:45.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.101 "is_configured": false, 00:09:45.101 "data_offset": 0, 00:09:45.101 "data_size": 0 00:09:45.101 } 00:09:45.101 ] 00:09:45.101 }' 00:09:45.101 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:45.101 06:22:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.359 06:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:45.618 [2024-07-23 06:22:58.092693] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.618 [2024-07-23 06:22:58.092724] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1350a2c34500 name Existed_Raid, state configuring 00:09:45.618 06:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:45.876 [2024-07-23 06:22:58.332738] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.876 [2024-07-23 06:22:58.332811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.876 [2024-07-23 06:22:58.332817] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.876 [2024-07-23 06:22:58.332826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.876 06:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:46.134 [2024-07-23 06:22:58.569848] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.134 BaseBdev1 00:09:46.134 06:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:46.134 06:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:46.134 06:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:46.134 06:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:46.134 06:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:46.134 06:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:46.134 06:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:46.391 06:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:46.649 [ 00:09:46.649 { 00:09:46.649 "name": "BaseBdev1", 00:09:46.649 "aliases": [ 00:09:46.649 "015cd83f-48bc-11ef-a06c-59ddad71024c" 00:09:46.649 ], 00:09:46.649 "product_name": "Malloc disk", 00:09:46.649 "block_size": 512, 00:09:46.649 "num_blocks": 65536, 00:09:46.649 "uuid": "015cd83f-48bc-11ef-a06c-59ddad71024c", 00:09:46.649 "assigned_rate_limits": { 00:09:46.649 "rw_ios_per_sec": 0, 00:09:46.649 "rw_mbytes_per_sec": 0, 00:09:46.649 "r_mbytes_per_sec": 0, 00:09:46.649 "w_mbytes_per_sec": 0 00:09:46.649 }, 
00:09:46.649 "claimed": true, 00:09:46.649 "claim_type": "exclusive_write", 00:09:46.649 "zoned": false, 00:09:46.649 "supported_io_types": { 00:09:46.649 "read": true, 00:09:46.649 "write": true, 00:09:46.649 "unmap": true, 00:09:46.650 "flush": true, 00:09:46.650 "reset": true, 00:09:46.650 "nvme_admin": false, 00:09:46.650 "nvme_io": false, 00:09:46.650 "nvme_io_md": false, 00:09:46.650 "write_zeroes": true, 00:09:46.650 "zcopy": true, 00:09:46.650 "get_zone_info": false, 00:09:46.650 "zone_management": false, 00:09:46.650 "zone_append": false, 00:09:46.650 "compare": false, 00:09:46.650 "compare_and_write": false, 00:09:46.650 "abort": true, 00:09:46.650 "seek_hole": false, 00:09:46.650 "seek_data": false, 00:09:46.650 "copy": true, 00:09:46.650 "nvme_iov_md": false 00:09:46.650 }, 00:09:46.650 "memory_domains": [ 00:09:46.650 { 00:09:46.650 "dma_device_id": "system", 00:09:46.650 "dma_device_type": 1 00:09:46.650 }, 00:09:46.650 { 00:09:46.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.650 "dma_device_type": 2 00:09:46.650 } 00:09:46.650 ], 00:09:46.650 "driver_specific": {} 00:09:46.650 } 00:09:46.650 ] 00:09:46.650 06:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:46.650 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:46.650 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:46.650 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:46.650 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:46.650 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:46.650 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:46.650 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:46.650 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:46.650 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:46.650 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:46.650 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.650 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.942 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:46.942 "name": "Existed_Raid", 00:09:46.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.942 "strip_size_kb": 64, 00:09:46.942 "state": "configuring", 00:09:46.942 "raid_level": "concat", 00:09:46.942 "superblock": false, 00:09:46.942 "num_base_bdevs": 2, 00:09:46.942 "num_base_bdevs_discovered": 1, 00:09:46.942 "num_base_bdevs_operational": 2, 00:09:46.942 "base_bdevs_list": [ 00:09:46.942 { 00:09:46.942 "name": "BaseBdev1", 00:09:46.942 "uuid": "015cd83f-48bc-11ef-a06c-59ddad71024c", 00:09:46.942 "is_configured": true, 00:09:46.942 "data_offset": 0, 00:09:46.942 "data_size": 65536 00:09:46.942 }, 00:09:46.942 { 00:09:46.942 "name": "BaseBdev2", 00:09:46.942 "uuid": "00000000-0000-0000-0000-000000000000", 
00:09:46.942 "is_configured": false, 00:09:46.942 "data_offset": 0, 00:09:46.942 "data_size": 0 00:09:46.942 } 00:09:46.942 ] 00:09:46.942 }' 00:09:46.942 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:46.942 06:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.214 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:47.473 [2024-07-23 06:22:59.884879] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.473 [2024-07-23 06:22:59.884913] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1350a2c34500 name Existed_Raid, state configuring 00:09:47.473 06:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:47.731 [2024-07-23 06:23:00.116934] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.731 [2024-07-23 06:23:00.117815] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.731 [2024-07-23 06:23:00.117866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.731 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:47.731 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:47.731 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:47.731 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:47.731 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:47.731 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:47.731 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:47.731 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:47.731 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:47.731 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:47.731 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:47.731 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:47.731 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.731 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.989 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:47.989 "name": "Existed_Raid", 00:09:47.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.989 "strip_size_kb": 64, 00:09:47.989 "state": "configuring", 00:09:47.989 "raid_level": "concat", 00:09:47.989 "superblock": false, 00:09:47.989 "num_base_bdevs": 2, 00:09:47.989 "num_base_bdevs_discovered": 1, 00:09:47.989 
"num_base_bdevs_operational": 2, 00:09:47.989 "base_bdevs_list": [ 00:09:47.989 { 00:09:47.989 "name": "BaseBdev1", 00:09:47.989 "uuid": "015cd83f-48bc-11ef-a06c-59ddad71024c", 00:09:47.989 "is_configured": true, 00:09:47.989 "data_offset": 0, 00:09:47.989 "data_size": 65536 00:09:47.989 }, 00:09:47.989 { 00:09:47.989 "name": "BaseBdev2", 00:09:47.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.989 "is_configured": false, 00:09:47.989 "data_offset": 0, 00:09:47.989 "data_size": 0 00:09:47.989 } 00:09:47.989 ] 00:09:47.989 }' 00:09:47.989 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:47.989 06:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.247 06:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:48.505 [2024-07-23 06:23:00.997149] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.505 [2024-07-23 06:23:00.997180] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1350a2c34a00 00:09:48.505 [2024-07-23 06:23:00.997184] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:48.505 [2024-07-23 06:23:00.997207] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1350a2c97e20 00:09:48.505 [2024-07-23 06:23:00.997297] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1350a2c34a00 00:09:48.505 [2024-07-23 06:23:00.997301] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1350a2c34a00 00:09:48.505 [2024-07-23 06:23:00.997335] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.505 BaseBdev2 00:09:48.505 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:48.505 06:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:48.505 06:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:48.505 06:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:48.505 06:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:48.506 06:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:48.506 06:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:49.072 [ 00:09:49.072 { 00:09:49.072 "name": "BaseBdev2", 00:09:49.072 "aliases": [ 00:09:49.072 "02cf5c0b-48bc-11ef-a06c-59ddad71024c" 00:09:49.072 ], 00:09:49.072 "product_name": "Malloc disk", 00:09:49.072 "block_size": 512, 00:09:49.072 "num_blocks": 65536, 00:09:49.072 "uuid": "02cf5c0b-48bc-11ef-a06c-59ddad71024c", 00:09:49.072 "assigned_rate_limits": { 00:09:49.072 "rw_ios_per_sec": 0, 00:09:49.072 "rw_mbytes_per_sec": 0, 00:09:49.072 "r_mbytes_per_sec": 0, 00:09:49.072 "w_mbytes_per_sec": 0 00:09:49.072 }, 00:09:49.072 "claimed": true, 00:09:49.072 "claim_type": "exclusive_write", 00:09:49.072 "zoned": 
false, 00:09:49.072 "supported_io_types": { 00:09:49.072 "read": true, 00:09:49.072 "write": true, 00:09:49.072 "unmap": true, 00:09:49.072 "flush": true, 00:09:49.072 "reset": true, 00:09:49.072 "nvme_admin": false, 00:09:49.072 "nvme_io": false, 00:09:49.072 "nvme_io_md": false, 00:09:49.072 "write_zeroes": true, 00:09:49.072 "zcopy": true, 00:09:49.072 "get_zone_info": false, 00:09:49.072 "zone_management": false, 00:09:49.072 "zone_append": false, 00:09:49.072 "compare": false, 00:09:49.072 "compare_and_write": false, 00:09:49.072 "abort": true, 00:09:49.072 "seek_hole": false, 00:09:49.072 "seek_data": false, 00:09:49.072 "copy": true, 00:09:49.072 "nvme_iov_md": false 00:09:49.072 }, 00:09:49.072 "memory_domains": [ 00:09:49.072 { 00:09:49.072 "dma_device_id": "system", 00:09:49.072 "dma_device_type": 1 00:09:49.072 }, 00:09:49.072 { 00:09:49.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.072 "dma_device_type": 2 00:09:49.072 } 00:09:49.072 ], 00:09:49.072 "driver_specific": {} 00:09:49.072 } 00:09:49.072 ] 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:49.072 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.330 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:49.330 "name": "Existed_Raid", 00:09:49.330 "uuid": "02cf62be-48bc-11ef-a06c-59ddad71024c", 00:09:49.330 "strip_size_kb": 64, 00:09:49.330 "state": "online", 00:09:49.330 "raid_level": "concat", 00:09:49.330 "superblock": false, 00:09:49.330 "num_base_bdevs": 2, 00:09:49.330 "num_base_bdevs_discovered": 2, 00:09:49.330 "num_base_bdevs_operational": 2, 00:09:49.330 "base_bdevs_list": [ 00:09:49.330 { 00:09:49.330 "name": "BaseBdev1", 00:09:49.330 "uuid": "015cd83f-48bc-11ef-a06c-59ddad71024c", 00:09:49.330 "is_configured": true, 00:09:49.330 "data_offset": 0, 00:09:49.330 "data_size": 65536 00:09:49.330 }, 00:09:49.330 { 
00:09:49.330 "name": "BaseBdev2", 00:09:49.330 "uuid": "02cf5c0b-48bc-11ef-a06c-59ddad71024c", 00:09:49.330 "is_configured": true, 00:09:49.330 "data_offset": 0, 00:09:49.331 "data_size": 65536 00:09:49.331 } 00:09:49.331 ] 00:09:49.331 }' 00:09:49.331 06:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:49.331 06:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.897 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:49.897 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:49.897 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:49.897 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:49.897 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:49.897 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:49.897 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:49.897 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:49.897 [2024-07-23 06:23:02.349178] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.897 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:49.897 "name": "Existed_Raid", 00:09:49.897 "aliases": [ 00:09:49.897 "02cf62be-48bc-11ef-a06c-59ddad71024c" 00:09:49.897 ], 00:09:49.897 "product_name": "Raid Volume", 00:09:49.897 "block_size": 512, 00:09:49.897 "num_blocks": 131072, 00:09:49.897 "uuid": "02cf62be-48bc-11ef-a06c-59ddad71024c", 00:09:49.897 "assigned_rate_limits": { 00:09:49.897 "rw_ios_per_sec": 0, 00:09:49.897 "rw_mbytes_per_sec": 0, 00:09:49.897 "r_mbytes_per_sec": 0, 00:09:49.897 "w_mbytes_per_sec": 0 00:09:49.897 }, 00:09:49.897 "claimed": false, 00:09:49.897 "zoned": false, 00:09:49.897 "supported_io_types": { 00:09:49.897 "read": true, 00:09:49.897 "write": true, 00:09:49.897 "unmap": true, 00:09:49.897 "flush": true, 00:09:49.897 "reset": true, 00:09:49.897 "nvme_admin": false, 00:09:49.897 "nvme_io": false, 00:09:49.897 "nvme_io_md": false, 00:09:49.897 "write_zeroes": true, 00:09:49.897 "zcopy": false, 00:09:49.897 "get_zone_info": false, 00:09:49.897 "zone_management": false, 00:09:49.897 "zone_append": false, 00:09:49.897 "compare": false, 00:09:49.897 "compare_and_write": false, 00:09:49.897 "abort": false, 00:09:49.897 "seek_hole": false, 00:09:49.897 "seek_data": false, 00:09:49.897 "copy": false, 00:09:49.897 "nvme_iov_md": false 00:09:49.897 }, 00:09:49.897 "memory_domains": [ 00:09:49.897 { 00:09:49.897 "dma_device_id": "system", 00:09:49.897 "dma_device_type": 1 00:09:49.897 }, 00:09:49.897 { 00:09:49.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.897 "dma_device_type": 2 00:09:49.897 }, 00:09:49.897 { 00:09:49.897 "dma_device_id": "system", 00:09:49.897 "dma_device_type": 1 00:09:49.897 }, 00:09:49.897 { 00:09:49.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.897 "dma_device_type": 2 00:09:49.897 } 00:09:49.897 ], 00:09:49.897 "driver_specific": { 00:09:49.897 "raid": { 00:09:49.897 "uuid": "02cf62be-48bc-11ef-a06c-59ddad71024c", 00:09:49.897 "strip_size_kb": 64, 00:09:49.897 "state": 
"online", 00:09:49.897 "raid_level": "concat", 00:09:49.897 "superblock": false, 00:09:49.897 "num_base_bdevs": 2, 00:09:49.897 "num_base_bdevs_discovered": 2, 00:09:49.897 "num_base_bdevs_operational": 2, 00:09:49.897 "base_bdevs_list": [ 00:09:49.897 { 00:09:49.897 "name": "BaseBdev1", 00:09:49.897 "uuid": "015cd83f-48bc-11ef-a06c-59ddad71024c", 00:09:49.897 "is_configured": true, 00:09:49.897 "data_offset": 0, 00:09:49.897 "data_size": 65536 00:09:49.897 }, 00:09:49.897 { 00:09:49.897 "name": "BaseBdev2", 00:09:49.897 "uuid": "02cf5c0b-48bc-11ef-a06c-59ddad71024c", 00:09:49.897 "is_configured": true, 00:09:49.897 "data_offset": 0, 00:09:49.897 "data_size": 65536 00:09:49.897 } 00:09:49.897 ] 00:09:49.897 } 00:09:49.897 } 00:09:49.897 }' 00:09:49.897 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.897 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:49.897 BaseBdev2' 00:09:49.897 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:49.897 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:49.897 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:50.155 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:50.155 "name": "BaseBdev1", 00:09:50.155 "aliases": [ 00:09:50.155 "015cd83f-48bc-11ef-a06c-59ddad71024c" 00:09:50.155 ], 00:09:50.155 "product_name": "Malloc disk", 00:09:50.155 "block_size": 512, 00:09:50.155 "num_blocks": 65536, 00:09:50.155 "uuid": "015cd83f-48bc-11ef-a06c-59ddad71024c", 00:09:50.155 "assigned_rate_limits": { 00:09:50.155 "rw_ios_per_sec": 0, 00:09:50.155 "rw_mbytes_per_sec": 0, 00:09:50.155 "r_mbytes_per_sec": 0, 00:09:50.155 "w_mbytes_per_sec": 0 00:09:50.155 }, 00:09:50.155 "claimed": true, 00:09:50.155 "claim_type": "exclusive_write", 00:09:50.155 "zoned": false, 00:09:50.155 "supported_io_types": { 00:09:50.155 "read": true, 00:09:50.155 "write": true, 00:09:50.155 "unmap": true, 00:09:50.155 "flush": true, 00:09:50.155 "reset": true, 00:09:50.155 "nvme_admin": false, 00:09:50.155 "nvme_io": false, 00:09:50.155 "nvme_io_md": false, 00:09:50.155 "write_zeroes": true, 00:09:50.155 "zcopy": true, 00:09:50.155 "get_zone_info": false, 00:09:50.155 "zone_management": false, 00:09:50.155 "zone_append": false, 00:09:50.155 "compare": false, 00:09:50.155 "compare_and_write": false, 00:09:50.155 "abort": true, 00:09:50.155 "seek_hole": false, 00:09:50.155 "seek_data": false, 00:09:50.155 "copy": true, 00:09:50.155 "nvme_iov_md": false 00:09:50.155 }, 00:09:50.155 "memory_domains": [ 00:09:50.155 { 00:09:50.155 "dma_device_id": "system", 00:09:50.155 "dma_device_type": 1 00:09:50.155 }, 00:09:50.155 { 00:09:50.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.155 "dma_device_type": 2 00:09:50.155 } 00:09:50.155 ], 00:09:50.155 "driver_specific": {} 00:09:50.155 }' 00:09:50.156 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.156 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.156 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:50.156 06:23:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.156 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.156 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:50.156 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:50.156 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:50.156 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:50.156 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:50.413 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:50.413 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:50.413 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:50.413 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:50.413 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:50.671 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:50.671 "name": "BaseBdev2", 00:09:50.671 "aliases": [ 00:09:50.671 "02cf5c0b-48bc-11ef-a06c-59ddad71024c" 00:09:50.671 ], 00:09:50.671 "product_name": "Malloc disk", 00:09:50.671 "block_size": 512, 00:09:50.671 "num_blocks": 65536, 00:09:50.671 "uuid": "02cf5c0b-48bc-11ef-a06c-59ddad71024c", 00:09:50.671 "assigned_rate_limits": { 00:09:50.671 "rw_ios_per_sec": 0, 00:09:50.671 "rw_mbytes_per_sec": 0, 00:09:50.671 "r_mbytes_per_sec": 0, 00:09:50.671 "w_mbytes_per_sec": 0 00:09:50.671 }, 00:09:50.671 "claimed": true, 00:09:50.671 "claim_type": "exclusive_write", 00:09:50.671 "zoned": false, 00:09:50.671 "supported_io_types": { 00:09:50.671 "read": true, 00:09:50.671 "write": true, 00:09:50.671 "unmap": true, 00:09:50.671 "flush": true, 00:09:50.671 "reset": true, 00:09:50.671 "nvme_admin": false, 00:09:50.671 "nvme_io": false, 00:09:50.671 "nvme_io_md": false, 00:09:50.671 "write_zeroes": true, 00:09:50.671 "zcopy": true, 00:09:50.671 "get_zone_info": false, 00:09:50.671 "zone_management": false, 00:09:50.671 "zone_append": false, 00:09:50.671 "compare": false, 00:09:50.671 "compare_and_write": false, 00:09:50.671 "abort": true, 00:09:50.671 "seek_hole": false, 00:09:50.671 "seek_data": false, 00:09:50.671 "copy": true, 00:09:50.671 "nvme_iov_md": false 00:09:50.671 }, 00:09:50.671 "memory_domains": [ 00:09:50.671 { 00:09:50.671 "dma_device_id": "system", 00:09:50.671 "dma_device_type": 1 00:09:50.671 }, 00:09:50.671 { 00:09:50.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.671 "dma_device_type": 2 00:09:50.671 } 00:09:50.671 ], 00:09:50.671 "driver_specific": {} 00:09:50.671 }' 00:09:50.671 06:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.672 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.672 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:50.672 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.672 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.672 06:23:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:50.672 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:50.672 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:50.672 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:50.672 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:50.672 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:50.672 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:50.672 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:50.942 [2024-07-23 06:23:03.273227] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:50.942 [2024-07-23 06:23:03.273255] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.942 [2024-07-23 06:23:03.273291] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:50.942 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.201 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:51.201 "name": "Existed_Raid", 00:09:51.201 "uuid": "02cf62be-48bc-11ef-a06c-59ddad71024c", 00:09:51.201 "strip_size_kb": 64, 00:09:51.201 "state": "offline", 00:09:51.201 "raid_level": "concat", 00:09:51.201 "superblock": false, 00:09:51.201 
"num_base_bdevs": 2, 00:09:51.201 "num_base_bdevs_discovered": 1, 00:09:51.201 "num_base_bdevs_operational": 1, 00:09:51.201 "base_bdevs_list": [ 00:09:51.201 { 00:09:51.201 "name": null, 00:09:51.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.201 "is_configured": false, 00:09:51.201 "data_offset": 0, 00:09:51.201 "data_size": 65536 00:09:51.201 }, 00:09:51.201 { 00:09:51.201 "name": "BaseBdev2", 00:09:51.201 "uuid": "02cf5c0b-48bc-11ef-a06c-59ddad71024c", 00:09:51.201 "is_configured": true, 00:09:51.201 "data_offset": 0, 00:09:51.201 "data_size": 65536 00:09:51.201 } 00:09:51.201 ] 00:09:51.201 }' 00:09:51.201 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:51.201 06:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.460 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:51.460 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:51.460 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.460 06:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:51.718 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:51.718 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:51.718 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:51.977 [2024-07-23 06:23:04.363419] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:51.977 [2024-07-23 06:23:04.363447] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1350a2c34a00 name Existed_Raid, state offline 00:09:51.977 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:51.977 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:51.977 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.977 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 49744 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 49744 ']' 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 49744 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 49744 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:52.235 killing process with pid 49744 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49744' 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 49744 00:09:52.235 [2024-07-23 06:23:04.667171] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.235 [2024-07-23 06:23:04.667203] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.235 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 49744 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:52.495 00:09:52.495 real 0m8.904s 00:09:52.495 user 0m15.362s 00:09:52.495 sys 0m1.682s 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.495 ************************************ 00:09:52.495 END TEST raid_state_function_test 00:09:52.495 ************************************ 00:09:52.495 06:23:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:52.495 06:23:04 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:52.495 06:23:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:52.495 06:23:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.495 06:23:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.495 ************************************ 00:09:52.495 START TEST raid_state_function_test_sb 00:09:52.495 ************************************ 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=50015 00:09:52.495 Process raid pid: 50015 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 50015' 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 50015 /var/tmp/spdk-raid.sock 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 50015 ']' 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:52.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:52.495 06:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.495 [2024-07-23 06:23:04.906386] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:52.496 [2024-07-23 06:23:04.906622] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:53.115 EAL: TSC is not safe to use in SMP mode 00:09:53.115 EAL: TSC is not invariant 00:09:53.115 [2024-07-23 06:23:05.439382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.115 [2024-07-23 06:23:05.539675] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
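For readers reconstructing the trace above: the _sb variant drives the same RPC flow as the previous test but passes -s to bdev_raid_create, so the raid superblock is enabled (visible later in the JSON as data_offset 2048 instead of 0). The lines below are a minimal illustrative sketch of that flow, using only commands that appear in this trace; it assumes a bdev_svc app is already listening on /var/tmp/spdk-raid.sock and is not part of the recorded run.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Creating the raid before its base bdevs exist leaves it in the "configuring" state.
  $RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  $RPC bdev_malloc_create 32 512 -b BaseBdev1   # first base bdev is claimed, raid stays "configuring"
  $RPC bdev_malloc_create 32 512 -b BaseBdev2   # second base bdev is claimed, raid goes "online"
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'   # .state should now read "online"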
00:09:53.115 [2024-07-23 06:23:05.542218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.115 [2024-07-23 06:23:05.543187] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.115 [2024-07-23 06:23:05.543205] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.681 06:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.681 06:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:09:53.681 06:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:53.681 [2024-07-23 06:23:06.144446] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.681 [2024-07-23 06:23:06.144527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.681 [2024-07-23 06:23:06.144533] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.681 [2024-07-23 06:23:06.144559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.681 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:53.681 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:53.681 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:53.681 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:53.681 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:53.681 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:53.681 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:53.681 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:53.681 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:53.681 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:53.681 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.681 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.939 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:53.939 "name": "Existed_Raid", 00:09:53.939 "uuid": "05e0cb24-48bc-11ef-a06c-59ddad71024c", 00:09:53.939 "strip_size_kb": 64, 00:09:53.939 "state": "configuring", 00:09:53.939 "raid_level": "concat", 00:09:53.939 "superblock": true, 00:09:53.939 "num_base_bdevs": 2, 00:09:53.939 "num_base_bdevs_discovered": 0, 00:09:53.939 "num_base_bdevs_operational": 2, 00:09:53.939 "base_bdevs_list": [ 00:09:53.939 { 00:09:53.939 "name": "BaseBdev1", 00:09:53.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.939 "is_configured": false, 00:09:53.940 "data_offset": 0, 00:09:53.940 "data_size": 0 00:09:53.940 }, 
00:09:53.940 { 00:09:53.940 "name": "BaseBdev2", 00:09:53.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.940 "is_configured": false, 00:09:53.940 "data_offset": 0, 00:09:53.940 "data_size": 0 00:09:53.940 } 00:09:53.940 ] 00:09:53.940 }' 00:09:53.940 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:53.940 06:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.505 06:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:54.505 [2024-07-23 06:23:07.000483] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.505 [2024-07-23 06:23:07.000512] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2b9505834500 name Existed_Raid, state configuring 00:09:54.505 06:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:54.763 [2024-07-23 06:23:07.232518] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.763 [2024-07-23 06:23:07.232587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.763 [2024-07-23 06:23:07.232593] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.763 [2024-07-23 06:23:07.232602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.763 06:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:55.020 [2024-07-23 06:23:07.517569] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.020 BaseBdev1 00:09:55.020 06:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:55.020 06:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:55.020 06:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:55.020 06:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:55.020 06:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:55.020 06:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:55.020 06:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:55.278 06:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:55.536 [ 00:09:55.536 { 00:09:55.536 "name": "BaseBdev1", 00:09:55.536 "aliases": [ 00:09:55.536 "06b228bc-48bc-11ef-a06c-59ddad71024c" 00:09:55.536 ], 00:09:55.536 "product_name": "Malloc disk", 00:09:55.536 "block_size": 512, 00:09:55.536 "num_blocks": 65536, 00:09:55.536 "uuid": "06b228bc-48bc-11ef-a06c-59ddad71024c", 00:09:55.536 "assigned_rate_limits": { 00:09:55.536 "rw_ios_per_sec": 0, 00:09:55.536 "rw_mbytes_per_sec": 
0, 00:09:55.536 "r_mbytes_per_sec": 0, 00:09:55.536 "w_mbytes_per_sec": 0 00:09:55.536 }, 00:09:55.536 "claimed": true, 00:09:55.536 "claim_type": "exclusive_write", 00:09:55.536 "zoned": false, 00:09:55.536 "supported_io_types": { 00:09:55.536 "read": true, 00:09:55.536 "write": true, 00:09:55.536 "unmap": true, 00:09:55.536 "flush": true, 00:09:55.536 "reset": true, 00:09:55.536 "nvme_admin": false, 00:09:55.536 "nvme_io": false, 00:09:55.536 "nvme_io_md": false, 00:09:55.536 "write_zeroes": true, 00:09:55.536 "zcopy": true, 00:09:55.536 "get_zone_info": false, 00:09:55.536 "zone_management": false, 00:09:55.536 "zone_append": false, 00:09:55.536 "compare": false, 00:09:55.536 "compare_and_write": false, 00:09:55.536 "abort": true, 00:09:55.536 "seek_hole": false, 00:09:55.536 "seek_data": false, 00:09:55.536 "copy": true, 00:09:55.536 "nvme_iov_md": false 00:09:55.536 }, 00:09:55.536 "memory_domains": [ 00:09:55.536 { 00:09:55.537 "dma_device_id": "system", 00:09:55.537 "dma_device_type": 1 00:09:55.537 }, 00:09:55.537 { 00:09:55.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.537 "dma_device_type": 2 00:09:55.537 } 00:09:55.537 ], 00:09:55.537 "driver_specific": {} 00:09:55.537 } 00:09:55.537 ] 00:09:55.537 06:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:55.537 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:55.537 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:55.537 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:55.537 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:55.537 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:55.537 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:55.537 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:55.537 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:55.537 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:55.537 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:55.537 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:55.537 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.795 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:55.795 "name": "Existed_Raid", 00:09:55.795 "uuid": "0686d1fb-48bc-11ef-a06c-59ddad71024c", 00:09:55.795 "strip_size_kb": 64, 00:09:55.795 "state": "configuring", 00:09:55.795 "raid_level": "concat", 00:09:55.795 "superblock": true, 00:09:55.795 "num_base_bdevs": 2, 00:09:55.795 "num_base_bdevs_discovered": 1, 00:09:55.795 "num_base_bdevs_operational": 2, 00:09:55.795 "base_bdevs_list": [ 00:09:55.795 { 00:09:55.795 "name": "BaseBdev1", 00:09:55.795 "uuid": "06b228bc-48bc-11ef-a06c-59ddad71024c", 00:09:55.795 "is_configured": true, 00:09:55.795 "data_offset": 2048, 00:09:55.795 "data_size": 
63488 00:09:55.795 }, 00:09:55.795 { 00:09:55.795 "name": "BaseBdev2", 00:09:55.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.795 "is_configured": false, 00:09:55.795 "data_offset": 0, 00:09:55.795 "data_size": 0 00:09:55.795 } 00:09:55.795 ] 00:09:55.795 }' 00:09:55.795 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:55.795 06:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.053 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:56.311 [2024-07-23 06:23:08.784634] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.311 [2024-07-23 06:23:08.784668] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2b9505834500 name Existed_Raid, state configuring 00:09:56.311 06:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:56.570 [2024-07-23 06:23:09.068682] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.570 [2024-07-23 06:23:09.069566] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.570 [2024-07-23 06:23:09.069614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.570 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:56.570 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:56.570 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:56.570 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:56.570 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:56.570 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:56.570 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:56.570 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:56.570 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:56.570 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:56.570 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:56.570 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:56.828 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.828 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.086 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:57.086 "name": "Existed_Raid", 00:09:57.086 "uuid": "079efefd-48bc-11ef-a06c-59ddad71024c", 00:09:57.086 "strip_size_kb": 64, 00:09:57.086 
"state": "configuring", 00:09:57.086 "raid_level": "concat", 00:09:57.086 "superblock": true, 00:09:57.086 "num_base_bdevs": 2, 00:09:57.086 "num_base_bdevs_discovered": 1, 00:09:57.086 "num_base_bdevs_operational": 2, 00:09:57.086 "base_bdevs_list": [ 00:09:57.086 { 00:09:57.086 "name": "BaseBdev1", 00:09:57.086 "uuid": "06b228bc-48bc-11ef-a06c-59ddad71024c", 00:09:57.086 "is_configured": true, 00:09:57.086 "data_offset": 2048, 00:09:57.086 "data_size": 63488 00:09:57.086 }, 00:09:57.086 { 00:09:57.086 "name": "BaseBdev2", 00:09:57.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.086 "is_configured": false, 00:09:57.086 "data_offset": 0, 00:09:57.086 "data_size": 0 00:09:57.086 } 00:09:57.086 ] 00:09:57.086 }' 00:09:57.086 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:57.086 06:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.344 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.602 [2024-07-23 06:23:09.904894] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.602 [2024-07-23 06:23:09.904957] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2b9505834a00 00:09:57.602 [2024-07-23 06:23:09.904964] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:57.602 [2024-07-23 06:23:09.904985] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2b9505897e20 00:09:57.602 [2024-07-23 06:23:09.905029] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2b9505834a00 00:09:57.602 [2024-07-23 06:23:09.905033] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2b9505834a00 00:09:57.602 [2024-07-23 06:23:09.905053] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.602 BaseBdev2 00:09:57.602 06:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:57.602 06:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:57.602 06:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:57.602 06:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:57.602 06:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:57.602 06:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:57.602 06:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:57.859 06:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.117 [ 00:09:58.117 { 00:09:58.117 "name": "BaseBdev2", 00:09:58.117 "aliases": [ 00:09:58.117 "081e9198-48bc-11ef-a06c-59ddad71024c" 00:09:58.117 ], 00:09:58.117 "product_name": "Malloc disk", 00:09:58.117 "block_size": 512, 00:09:58.117 "num_blocks": 65536, 00:09:58.117 "uuid": "081e9198-48bc-11ef-a06c-59ddad71024c", 00:09:58.117 "assigned_rate_limits": { 00:09:58.117 "rw_ios_per_sec": 0, 
00:09:58.117 "rw_mbytes_per_sec": 0, 00:09:58.117 "r_mbytes_per_sec": 0, 00:09:58.117 "w_mbytes_per_sec": 0 00:09:58.117 }, 00:09:58.117 "claimed": true, 00:09:58.117 "claim_type": "exclusive_write", 00:09:58.117 "zoned": false, 00:09:58.117 "supported_io_types": { 00:09:58.117 "read": true, 00:09:58.117 "write": true, 00:09:58.117 "unmap": true, 00:09:58.117 "flush": true, 00:09:58.117 "reset": true, 00:09:58.117 "nvme_admin": false, 00:09:58.117 "nvme_io": false, 00:09:58.117 "nvme_io_md": false, 00:09:58.117 "write_zeroes": true, 00:09:58.117 "zcopy": true, 00:09:58.117 "get_zone_info": false, 00:09:58.117 "zone_management": false, 00:09:58.117 "zone_append": false, 00:09:58.117 "compare": false, 00:09:58.117 "compare_and_write": false, 00:09:58.117 "abort": true, 00:09:58.117 "seek_hole": false, 00:09:58.117 "seek_data": false, 00:09:58.117 "copy": true, 00:09:58.117 "nvme_iov_md": false 00:09:58.117 }, 00:09:58.117 "memory_domains": [ 00:09:58.117 { 00:09:58.117 "dma_device_id": "system", 00:09:58.117 "dma_device_type": 1 00:09:58.117 }, 00:09:58.117 { 00:09:58.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.117 "dma_device_type": 2 00:09:58.117 } 00:09:58.117 ], 00:09:58.117 "driver_specific": {} 00:09:58.117 } 00:09:58.117 ] 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.117 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.386 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:58.386 "name": "Existed_Raid", 00:09:58.386 "uuid": "079efefd-48bc-11ef-a06c-59ddad71024c", 00:09:58.386 "strip_size_kb": 64, 00:09:58.386 "state": "online", 00:09:58.386 "raid_level": "concat", 00:09:58.386 "superblock": true, 00:09:58.386 "num_base_bdevs": 2, 00:09:58.386 "num_base_bdevs_discovered": 2, 00:09:58.386 "num_base_bdevs_operational": 2, 
00:09:58.386 "base_bdevs_list": [ 00:09:58.386 { 00:09:58.386 "name": "BaseBdev1", 00:09:58.386 "uuid": "06b228bc-48bc-11ef-a06c-59ddad71024c", 00:09:58.386 "is_configured": true, 00:09:58.386 "data_offset": 2048, 00:09:58.386 "data_size": 63488 00:09:58.386 }, 00:09:58.386 { 00:09:58.386 "name": "BaseBdev2", 00:09:58.386 "uuid": "081e9198-48bc-11ef-a06c-59ddad71024c", 00:09:58.386 "is_configured": true, 00:09:58.386 "data_offset": 2048, 00:09:58.386 "data_size": 63488 00:09:58.386 } 00:09:58.386 ] 00:09:58.386 }' 00:09:58.386 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:58.386 06:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.646 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:58.646 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:58.646 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:58.646 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:58.646 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:58.646 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:58.646 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:58.646 06:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:58.903 [2024-07-23 06:23:11.184854] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.903 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:58.903 "name": "Existed_Raid", 00:09:58.903 "aliases": [ 00:09:58.903 "079efefd-48bc-11ef-a06c-59ddad71024c" 00:09:58.903 ], 00:09:58.903 "product_name": "Raid Volume", 00:09:58.903 "block_size": 512, 00:09:58.903 "num_blocks": 126976, 00:09:58.903 "uuid": "079efefd-48bc-11ef-a06c-59ddad71024c", 00:09:58.903 "assigned_rate_limits": { 00:09:58.903 "rw_ios_per_sec": 0, 00:09:58.903 "rw_mbytes_per_sec": 0, 00:09:58.903 "r_mbytes_per_sec": 0, 00:09:58.903 "w_mbytes_per_sec": 0 00:09:58.903 }, 00:09:58.903 "claimed": false, 00:09:58.903 "zoned": false, 00:09:58.903 "supported_io_types": { 00:09:58.903 "read": true, 00:09:58.903 "write": true, 00:09:58.903 "unmap": true, 00:09:58.903 "flush": true, 00:09:58.903 "reset": true, 00:09:58.903 "nvme_admin": false, 00:09:58.903 "nvme_io": false, 00:09:58.903 "nvme_io_md": false, 00:09:58.903 "write_zeroes": true, 00:09:58.903 "zcopy": false, 00:09:58.903 "get_zone_info": false, 00:09:58.903 "zone_management": false, 00:09:58.903 "zone_append": false, 00:09:58.903 "compare": false, 00:09:58.903 "compare_and_write": false, 00:09:58.903 "abort": false, 00:09:58.903 "seek_hole": false, 00:09:58.903 "seek_data": false, 00:09:58.903 "copy": false, 00:09:58.903 "nvme_iov_md": false 00:09:58.903 }, 00:09:58.903 "memory_domains": [ 00:09:58.903 { 00:09:58.903 "dma_device_id": "system", 00:09:58.903 "dma_device_type": 1 00:09:58.903 }, 00:09:58.903 { 00:09:58.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.903 "dma_device_type": 2 00:09:58.903 }, 00:09:58.903 { 00:09:58.903 "dma_device_id": "system", 00:09:58.903 "dma_device_type": 1 00:09:58.903 
}, 00:09:58.903 { 00:09:58.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.903 "dma_device_type": 2 00:09:58.903 } 00:09:58.903 ], 00:09:58.903 "driver_specific": { 00:09:58.903 "raid": { 00:09:58.903 "uuid": "079efefd-48bc-11ef-a06c-59ddad71024c", 00:09:58.903 "strip_size_kb": 64, 00:09:58.903 "state": "online", 00:09:58.903 "raid_level": "concat", 00:09:58.903 "superblock": true, 00:09:58.903 "num_base_bdevs": 2, 00:09:58.903 "num_base_bdevs_discovered": 2, 00:09:58.903 "num_base_bdevs_operational": 2, 00:09:58.903 "base_bdevs_list": [ 00:09:58.903 { 00:09:58.903 "name": "BaseBdev1", 00:09:58.903 "uuid": "06b228bc-48bc-11ef-a06c-59ddad71024c", 00:09:58.903 "is_configured": true, 00:09:58.903 "data_offset": 2048, 00:09:58.903 "data_size": 63488 00:09:58.903 }, 00:09:58.903 { 00:09:58.903 "name": "BaseBdev2", 00:09:58.903 "uuid": "081e9198-48bc-11ef-a06c-59ddad71024c", 00:09:58.903 "is_configured": true, 00:09:58.903 "data_offset": 2048, 00:09:58.903 "data_size": 63488 00:09:58.903 } 00:09:58.903 ] 00:09:58.903 } 00:09:58.903 } 00:09:58.903 }' 00:09:58.903 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:58.903 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:58.903 BaseBdev2' 00:09:58.903 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:58.903 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:58.903 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:59.161 "name": "BaseBdev1", 00:09:59.161 "aliases": [ 00:09:59.161 "06b228bc-48bc-11ef-a06c-59ddad71024c" 00:09:59.161 ], 00:09:59.161 "product_name": "Malloc disk", 00:09:59.161 "block_size": 512, 00:09:59.161 "num_blocks": 65536, 00:09:59.161 "uuid": "06b228bc-48bc-11ef-a06c-59ddad71024c", 00:09:59.161 "assigned_rate_limits": { 00:09:59.161 "rw_ios_per_sec": 0, 00:09:59.161 "rw_mbytes_per_sec": 0, 00:09:59.161 "r_mbytes_per_sec": 0, 00:09:59.161 "w_mbytes_per_sec": 0 00:09:59.161 }, 00:09:59.161 "claimed": true, 00:09:59.161 "claim_type": "exclusive_write", 00:09:59.161 "zoned": false, 00:09:59.161 "supported_io_types": { 00:09:59.161 "read": true, 00:09:59.161 "write": true, 00:09:59.161 "unmap": true, 00:09:59.161 "flush": true, 00:09:59.161 "reset": true, 00:09:59.161 "nvme_admin": false, 00:09:59.161 "nvme_io": false, 00:09:59.161 "nvme_io_md": false, 00:09:59.161 "write_zeroes": true, 00:09:59.161 "zcopy": true, 00:09:59.161 "get_zone_info": false, 00:09:59.161 "zone_management": false, 00:09:59.161 "zone_append": false, 00:09:59.161 "compare": false, 00:09:59.161 "compare_and_write": false, 00:09:59.161 "abort": true, 00:09:59.161 "seek_hole": false, 00:09:59.161 "seek_data": false, 00:09:59.161 "copy": true, 00:09:59.161 "nvme_iov_md": false 00:09:59.161 }, 00:09:59.161 "memory_domains": [ 00:09:59.161 { 00:09:59.161 "dma_device_id": "system", 00:09:59.161 "dma_device_type": 1 00:09:59.161 }, 00:09:59.161 { 00:09:59.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.161 "dma_device_type": 2 00:09:59.161 } 00:09:59.161 ], 00:09:59.161 "driver_specific": {} 00:09:59.161 }' 00:09:59.161 06:23:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:59.161 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:59.476 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:59.476 "name": "BaseBdev2", 00:09:59.476 "aliases": [ 00:09:59.476 "081e9198-48bc-11ef-a06c-59ddad71024c" 00:09:59.476 ], 00:09:59.476 "product_name": "Malloc disk", 00:09:59.476 "block_size": 512, 00:09:59.476 "num_blocks": 65536, 00:09:59.476 "uuid": "081e9198-48bc-11ef-a06c-59ddad71024c", 00:09:59.476 "assigned_rate_limits": { 00:09:59.476 "rw_ios_per_sec": 0, 00:09:59.476 "rw_mbytes_per_sec": 0, 00:09:59.476 "r_mbytes_per_sec": 0, 00:09:59.476 "w_mbytes_per_sec": 0 00:09:59.476 }, 00:09:59.476 "claimed": true, 00:09:59.476 "claim_type": "exclusive_write", 00:09:59.476 "zoned": false, 00:09:59.476 "supported_io_types": { 00:09:59.476 "read": true, 00:09:59.476 "write": true, 00:09:59.476 "unmap": true, 00:09:59.476 "flush": true, 00:09:59.476 "reset": true, 00:09:59.476 "nvme_admin": false, 00:09:59.476 "nvme_io": false, 00:09:59.476 "nvme_io_md": false, 00:09:59.476 "write_zeroes": true, 00:09:59.476 "zcopy": true, 00:09:59.476 "get_zone_info": false, 00:09:59.476 "zone_management": false, 00:09:59.476 "zone_append": false, 00:09:59.476 "compare": false, 00:09:59.476 "compare_and_write": false, 00:09:59.476 "abort": true, 00:09:59.476 "seek_hole": false, 00:09:59.476 "seek_data": false, 00:09:59.476 "copy": true, 00:09:59.476 "nvme_iov_md": false 00:09:59.476 }, 00:09:59.476 "memory_domains": [ 00:09:59.476 { 00:09:59.476 "dma_device_id": "system", 00:09:59.476 "dma_device_type": 1 00:09:59.476 }, 00:09:59.476 { 00:09:59.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.476 "dma_device_type": 2 00:09:59.476 } 00:09:59.476 ], 00:09:59.476 "driver_specific": {} 00:09:59.476 }' 00:09:59.476 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.476 06:23:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.476 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:59.476 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:59.476 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:59.476 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:59.476 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.476 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.476 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:59.476 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.476 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.476 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:59.476 06:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:59.743 [2024-07-23 06:23:12.104894] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.743 [2024-07-23 06:23:12.104921] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.743 [2024-07-23 06:23:12.104936] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
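The step just traced is the one that matters for a concat array: has_redundancy returns 1 (false) for concat, so once a base bdev is removed the test expects the array to drop from online to offline. A condensed illustrative reproduction, assuming the Existed_Raid volume from this trace is online (not part of the recorded run); the bdev_raid_get_bdevs/jq pair that continues in the trace below performs exactly this check.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_malloc_delete BaseBdev1   # concat has no redundancy, so the raid cannot stay online
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'   # .state should read "offline"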
00:09:59.743 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.001 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:00.001 "name": "Existed_Raid", 00:10:00.001 "uuid": "079efefd-48bc-11ef-a06c-59ddad71024c", 00:10:00.001 "strip_size_kb": 64, 00:10:00.001 "state": "offline", 00:10:00.001 "raid_level": "concat", 00:10:00.001 "superblock": true, 00:10:00.001 "num_base_bdevs": 2, 00:10:00.001 "num_base_bdevs_discovered": 1, 00:10:00.001 "num_base_bdevs_operational": 1, 00:10:00.001 "base_bdevs_list": [ 00:10:00.001 { 00:10:00.001 "name": null, 00:10:00.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.001 "is_configured": false, 00:10:00.001 "data_offset": 2048, 00:10:00.001 "data_size": 63488 00:10:00.001 }, 00:10:00.001 { 00:10:00.001 "name": "BaseBdev2", 00:10:00.001 "uuid": "081e9198-48bc-11ef-a06c-59ddad71024c", 00:10:00.001 "is_configured": true, 00:10:00.001 "data_offset": 2048, 00:10:00.001 "data_size": 63488 00:10:00.001 } 00:10:00.001 ] 00:10:00.001 }' 00:10:00.001 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:00.001 06:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.259 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:00.259 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:00.259 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.259 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:00.516 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:00.516 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.516 06:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:00.774 [2024-07-23 06:23:13.198856] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:00.774 [2024-07-23 06:23:13.198924] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2b9505834a00 name Existed_Raid, state offline 00:10:00.774 06:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:00.774 06:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:00.774 06:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.774 06:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 50015 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@948 -- # '[' -z 50015 ']' 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 50015 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 50015 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:01.032 killing process with pid 50015 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50015' 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 50015 00:10:01.032 [2024-07-23 06:23:13.462181] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:01.032 [2024-07-23 06:23:13.462216] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.032 06:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 50015 00:10:01.291 06:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:01.291 00:10:01.291 real 0m8.752s 00:10:01.291 user 0m15.236s 00:10:01.291 sys 0m1.509s 00:10:01.291 06:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.292 06:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.292 ************************************ 00:10:01.292 END TEST raid_state_function_test_sb 00:10:01.292 ************************************ 00:10:01.292 06:23:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:01.292 06:23:13 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:01.292 06:23:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:01.292 06:23:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.292 06:23:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.292 ************************************ 00:10:01.292 START TEST raid_superblock_test 00:10:01.292 ************************************ 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=50285 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 50285 /var/tmp/spdk-raid.sock 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 50285 ']' 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:01.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:01.292 06:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.292 [2024-07-23 06:23:13.704459] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:01.292 [2024-07-23 06:23:13.704715] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:01.882 EAL: TSC is not safe to use in SMP mode 00:10:01.882 EAL: TSC is not invariant 00:10:01.882 [2024-07-23 06:23:14.262741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.882 [2024-07-23 06:23:14.361541] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
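Once the reactor is up, the superblock test builds its array on passthru bdevs rather than raw mallocs: each malloc bdev is wrapped in a passthru bdev with a fixed UUID (the all-zero-prefixed UUIDs ending in 000001 and 000002 that later appear in raid_bdev1's base_bdevs_list), and the raid is created on top of pt1/pt2. A condensed illustrative sketch using only commands visible in this trace (not part of the recorded run):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_malloc_create 32 512 -b malloc1
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_malloc_create 32 512 -b malloc2
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'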
00:10:01.882 [2024-07-23 06:23:14.364078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.882 [2024-07-23 06:23:14.365006] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.882 [2024-07-23 06:23:14.365024] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.456 06:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:02.456 06:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:10:02.456 06:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:02.456 06:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:02.456 06:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:02.456 06:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:02.456 06:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:02.456 06:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:02.456 06:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:02.456 06:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:02.456 06:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:02.715 malloc1 00:10:02.715 06:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:02.974 [2024-07-23 06:23:15.306268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:02.974 [2024-07-23 06:23:15.306344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.974 [2024-07-23 06:23:15.306373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x378806434780 00:10:02.974 [2024-07-23 06:23:15.306381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.974 [2024-07-23 06:23:15.307293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.974 [2024-07-23 06:23:15.307335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:02.974 pt1 00:10:02.974 06:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:02.974 06:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:02.974 06:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:02.974 06:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:02.974 06:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:02.974 06:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:02.974 06:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:02.974 06:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:02.974 06:23:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:03.232 malloc2 00:10:03.232 06:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:03.491 [2024-07-23 06:23:15.822338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:03.491 [2024-07-23 06:23:15.822405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.491 [2024-07-23 06:23:15.822433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x378806434c80 00:10:03.491 [2024-07-23 06:23:15.822441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.491 [2024-07-23 06:23:15.823093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.491 [2024-07-23 06:23:15.823117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:03.491 pt2 00:10:03.491 06:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:03.491 06:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:03.491 06:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:10:03.778 [2024-07-23 06:23:16.062348] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:03.778 [2024-07-23 06:23:16.062966] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:03.778 [2024-07-23 06:23:16.063018] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x378806434f00 00:10:03.778 [2024-07-23 06:23:16.063024] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:03.778 [2024-07-23 06:23:16.063061] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x378806497e20 00:10:03.778 [2024-07-23 06:23:16.063139] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x378806434f00 00:10:03.778 [2024-07-23 06:23:16.063143] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x378806434f00 00:10:03.778 [2024-07-23 06:23:16.063170] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.778 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:03.778 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:03.778 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:03.778 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:03.778 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:03.778 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:03.778 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:03.778 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:03.778 06:23:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:03.778 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:03.778 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:03.778 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.037 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:04.037 "name": "raid_bdev1", 00:10:04.037 "uuid": "0bca2514-48bc-11ef-a06c-59ddad71024c", 00:10:04.037 "strip_size_kb": 64, 00:10:04.037 "state": "online", 00:10:04.037 "raid_level": "concat", 00:10:04.037 "superblock": true, 00:10:04.037 "num_base_bdevs": 2, 00:10:04.037 "num_base_bdevs_discovered": 2, 00:10:04.037 "num_base_bdevs_operational": 2, 00:10:04.037 "base_bdevs_list": [ 00:10:04.037 { 00:10:04.037 "name": "pt1", 00:10:04.037 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.037 "is_configured": true, 00:10:04.037 "data_offset": 2048, 00:10:04.037 "data_size": 63488 00:10:04.037 }, 00:10:04.037 { 00:10:04.037 "name": "pt2", 00:10:04.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.037 "is_configured": true, 00:10:04.037 "data_offset": 2048, 00:10:04.037 "data_size": 63488 00:10:04.037 } 00:10:04.037 ] 00:10:04.037 }' 00:10:04.037 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:04.037 06:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.296 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:04.296 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:04.296 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:04.296 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:04.296 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:04.296 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:04.296 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:04.296 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:04.554 [2024-07-23 06:23:16.874432] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.554 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:04.554 "name": "raid_bdev1", 00:10:04.555 "aliases": [ 00:10:04.555 "0bca2514-48bc-11ef-a06c-59ddad71024c" 00:10:04.555 ], 00:10:04.555 "product_name": "Raid Volume", 00:10:04.555 "block_size": 512, 00:10:04.555 "num_blocks": 126976, 00:10:04.555 "uuid": "0bca2514-48bc-11ef-a06c-59ddad71024c", 00:10:04.555 "assigned_rate_limits": { 00:10:04.555 "rw_ios_per_sec": 0, 00:10:04.555 "rw_mbytes_per_sec": 0, 00:10:04.555 "r_mbytes_per_sec": 0, 00:10:04.555 "w_mbytes_per_sec": 0 00:10:04.555 }, 00:10:04.555 "claimed": false, 00:10:04.555 "zoned": false, 00:10:04.555 "supported_io_types": { 00:10:04.555 "read": true, 00:10:04.555 "write": true, 00:10:04.555 "unmap": true, 00:10:04.555 "flush": true, 00:10:04.555 "reset": true, 00:10:04.555 "nvme_admin": false, 00:10:04.555 "nvme_io": 
false, 00:10:04.555 "nvme_io_md": false, 00:10:04.555 "write_zeroes": true, 00:10:04.555 "zcopy": false, 00:10:04.555 "get_zone_info": false, 00:10:04.555 "zone_management": false, 00:10:04.555 "zone_append": false, 00:10:04.555 "compare": false, 00:10:04.555 "compare_and_write": false, 00:10:04.555 "abort": false, 00:10:04.555 "seek_hole": false, 00:10:04.555 "seek_data": false, 00:10:04.555 "copy": false, 00:10:04.555 "nvme_iov_md": false 00:10:04.555 }, 00:10:04.555 "memory_domains": [ 00:10:04.555 { 00:10:04.555 "dma_device_id": "system", 00:10:04.555 "dma_device_type": 1 00:10:04.555 }, 00:10:04.555 { 00:10:04.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.555 "dma_device_type": 2 00:10:04.555 }, 00:10:04.555 { 00:10:04.555 "dma_device_id": "system", 00:10:04.555 "dma_device_type": 1 00:10:04.555 }, 00:10:04.555 { 00:10:04.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.555 "dma_device_type": 2 00:10:04.555 } 00:10:04.555 ], 00:10:04.555 "driver_specific": { 00:10:04.555 "raid": { 00:10:04.555 "uuid": "0bca2514-48bc-11ef-a06c-59ddad71024c", 00:10:04.555 "strip_size_kb": 64, 00:10:04.555 "state": "online", 00:10:04.555 "raid_level": "concat", 00:10:04.555 "superblock": true, 00:10:04.555 "num_base_bdevs": 2, 00:10:04.555 "num_base_bdevs_discovered": 2, 00:10:04.555 "num_base_bdevs_operational": 2, 00:10:04.555 "base_bdevs_list": [ 00:10:04.555 { 00:10:04.555 "name": "pt1", 00:10:04.555 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.555 "is_configured": true, 00:10:04.555 "data_offset": 2048, 00:10:04.555 "data_size": 63488 00:10:04.555 }, 00:10:04.555 { 00:10:04.555 "name": "pt2", 00:10:04.555 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.555 "is_configured": true, 00:10:04.555 "data_offset": 2048, 00:10:04.555 "data_size": 63488 00:10:04.555 } 00:10:04.555 ] 00:10:04.555 } 00:10:04.555 } 00:10:04.555 }' 00:10:04.555 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.555 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:04.555 pt2' 00:10:04.555 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:04.555 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:04.555 06:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:04.814 "name": "pt1", 00:10:04.814 "aliases": [ 00:10:04.814 "00000000-0000-0000-0000-000000000001" 00:10:04.814 ], 00:10:04.814 "product_name": "passthru", 00:10:04.814 "block_size": 512, 00:10:04.814 "num_blocks": 65536, 00:10:04.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.814 "assigned_rate_limits": { 00:10:04.814 "rw_ios_per_sec": 0, 00:10:04.814 "rw_mbytes_per_sec": 0, 00:10:04.814 "r_mbytes_per_sec": 0, 00:10:04.814 "w_mbytes_per_sec": 0 00:10:04.814 }, 00:10:04.814 "claimed": true, 00:10:04.814 "claim_type": "exclusive_write", 00:10:04.814 "zoned": false, 00:10:04.814 "supported_io_types": { 00:10:04.814 "read": true, 00:10:04.814 "write": true, 00:10:04.814 "unmap": true, 00:10:04.814 "flush": true, 00:10:04.814 "reset": true, 00:10:04.814 "nvme_admin": false, 00:10:04.814 "nvme_io": false, 00:10:04.814 "nvme_io_md": false, 00:10:04.814 "write_zeroes": true, 
00:10:04.814 "zcopy": true, 00:10:04.814 "get_zone_info": false, 00:10:04.814 "zone_management": false, 00:10:04.814 "zone_append": false, 00:10:04.814 "compare": false, 00:10:04.814 "compare_and_write": false, 00:10:04.814 "abort": true, 00:10:04.814 "seek_hole": false, 00:10:04.814 "seek_data": false, 00:10:04.814 "copy": true, 00:10:04.814 "nvme_iov_md": false 00:10:04.814 }, 00:10:04.814 "memory_domains": [ 00:10:04.814 { 00:10:04.814 "dma_device_id": "system", 00:10:04.814 "dma_device_type": 1 00:10:04.814 }, 00:10:04.814 { 00:10:04.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.814 "dma_device_type": 2 00:10:04.814 } 00:10:04.814 ], 00:10:04.814 "driver_specific": { 00:10:04.814 "passthru": { 00:10:04.814 "name": "pt1", 00:10:04.814 "base_bdev_name": "malloc1" 00:10:04.814 } 00:10:04.814 } 00:10:04.814 }' 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:04.814 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:05.072 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:05.072 "name": "pt2", 00:10:05.072 "aliases": [ 00:10:05.072 "00000000-0000-0000-0000-000000000002" 00:10:05.072 ], 00:10:05.072 "product_name": "passthru", 00:10:05.072 "block_size": 512, 00:10:05.072 "num_blocks": 65536, 00:10:05.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.072 "assigned_rate_limits": { 00:10:05.072 "rw_ios_per_sec": 0, 00:10:05.073 "rw_mbytes_per_sec": 0, 00:10:05.073 "r_mbytes_per_sec": 0, 00:10:05.073 "w_mbytes_per_sec": 0 00:10:05.073 }, 00:10:05.073 "claimed": true, 00:10:05.073 "claim_type": "exclusive_write", 00:10:05.073 "zoned": false, 00:10:05.073 "supported_io_types": { 00:10:05.073 "read": true, 00:10:05.073 "write": true, 00:10:05.073 "unmap": true, 00:10:05.073 "flush": true, 00:10:05.073 "reset": true, 00:10:05.073 "nvme_admin": false, 00:10:05.073 "nvme_io": false, 00:10:05.073 "nvme_io_md": false, 00:10:05.073 "write_zeroes": true, 00:10:05.073 "zcopy": true, 00:10:05.073 "get_zone_info": false, 00:10:05.073 "zone_management": false, 00:10:05.073 "zone_append": false, 00:10:05.073 
"compare": false, 00:10:05.073 "compare_and_write": false, 00:10:05.073 "abort": true, 00:10:05.073 "seek_hole": false, 00:10:05.073 "seek_data": false, 00:10:05.073 "copy": true, 00:10:05.073 "nvme_iov_md": false 00:10:05.073 }, 00:10:05.073 "memory_domains": [ 00:10:05.073 { 00:10:05.073 "dma_device_id": "system", 00:10:05.073 "dma_device_type": 1 00:10:05.073 }, 00:10:05.073 { 00:10:05.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.073 "dma_device_type": 2 00:10:05.073 } 00:10:05.073 ], 00:10:05.073 "driver_specific": { 00:10:05.073 "passthru": { 00:10:05.073 "name": "pt2", 00:10:05.073 "base_bdev_name": "malloc2" 00:10:05.073 } 00:10:05.073 } 00:10:05.073 }' 00:10:05.073 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:05.073 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:05.073 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:05.073 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:05.073 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:05.073 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:05.073 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:05.073 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:05.073 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:05.073 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:05.073 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:05.073 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:05.073 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:05.073 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:05.332 [2024-07-23 06:23:17.810512] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.332 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=0bca2514-48bc-11ef-a06c-59ddad71024c 00:10:05.332 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 0bca2514-48bc-11ef-a06c-59ddad71024c ']' 00:10:05.332 06:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:05.590 [2024-07-23 06:23:18.090490] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:05.590 [2024-07-23 06:23:18.090516] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.590 [2024-07-23 06:23:18.090555] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.591 [2024-07-23 06:23:18.090568] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.591 [2024-07-23 06:23:18.090573] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x378806434f00 name raid_bdev1, state offline 00:10:05.858 06:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:10:05.858 06:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:06.131 06:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:06.131 06:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:06.131 06:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:06.131 06:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:06.390 06:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:06.390 06:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:06.648 06:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:06.648 06:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:06.906 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:06.906 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:06.906 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:10:06.906 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:06.906 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.906 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:06.906 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.906 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:06.906 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.906 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:06.906 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.906 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:06.906 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:07.165 [2024-07-23 06:23:19.442580] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:07.165 [2024-07-23 06:23:19.443176] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:07.165 [2024-07-23 06:23:19.443201] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:10:07.165 [2024-07-23 06:23:19.443237] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:07.165 [2024-07-23 06:23:19.443248] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.165 [2024-07-23 06:23:19.443253] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x378806434c80 name raid_bdev1, state configuring 00:10:07.165 request: 00:10:07.165 { 00:10:07.165 "name": "raid_bdev1", 00:10:07.165 "raid_level": "concat", 00:10:07.165 "base_bdevs": [ 00:10:07.165 "malloc1", 00:10:07.165 "malloc2" 00:10:07.165 ], 00:10:07.165 "strip_size_kb": 64, 00:10:07.165 "superblock": false, 00:10:07.165 "method": "bdev_raid_create", 00:10:07.165 "req_id": 1 00:10:07.165 } 00:10:07.165 Got JSON-RPC error response 00:10:07.165 response: 00:10:07.165 { 00:10:07.165 "code": -17, 00:10:07.165 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:07.165 } 00:10:07.165 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:10:07.165 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:07.165 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:07.165 06:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:07.165 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:07.165 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.424 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:07.424 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:07.424 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:07.682 [2024-07-23 06:23:19.974602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:07.682 [2024-07-23 06:23:19.974661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.682 [2024-07-23 06:23:19.974673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x378806434780 00:10:07.682 [2024-07-23 06:23:19.974681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.682 [2024-07-23 06:23:19.975341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.682 [2024-07-23 06:23:19.975366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:07.682 [2024-07-23 06:23:19.975390] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:07.682 [2024-07-23 06:23:19.975402] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:07.682 pt1 00:10:07.682 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:07.682 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:07.682 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:07.682 06:23:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:07.682 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:07.682 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:07.682 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:07.682 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:07.682 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:07.682 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:07.682 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.682 06:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.955 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:07.955 "name": "raid_bdev1", 00:10:07.955 "uuid": "0bca2514-48bc-11ef-a06c-59ddad71024c", 00:10:07.955 "strip_size_kb": 64, 00:10:07.955 "state": "configuring", 00:10:07.955 "raid_level": "concat", 00:10:07.955 "superblock": true, 00:10:07.955 "num_base_bdevs": 2, 00:10:07.955 "num_base_bdevs_discovered": 1, 00:10:07.955 "num_base_bdevs_operational": 2, 00:10:07.955 "base_bdevs_list": [ 00:10:07.955 { 00:10:07.955 "name": "pt1", 00:10:07.955 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:07.955 "is_configured": true, 00:10:07.955 "data_offset": 2048, 00:10:07.955 "data_size": 63488 00:10:07.955 }, 00:10:07.955 { 00:10:07.955 "name": null, 00:10:07.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.955 "is_configured": false, 00:10:07.955 "data_offset": 2048, 00:10:07.955 "data_size": 63488 00:10:07.955 } 00:10:07.955 ] 00:10:07.955 }' 00:10:07.955 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:07.955 06:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.265 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:10:08.265 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:08.265 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:08.265 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:08.525 [2024-07-23 06:23:20.814650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:08.525 [2024-07-23 06:23:20.814704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.525 [2024-07-23 06:23:20.814715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x378806434f00 00:10:08.525 [2024-07-23 06:23:20.814724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.525 [2024-07-23 06:23:20.814847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.525 [2024-07-23 06:23:20.814859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:08.525 [2024-07-23 06:23:20.814882] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 
00:10:08.525 [2024-07-23 06:23:20.814891] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:08.525 [2024-07-23 06:23:20.814916] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x378806435180 00:10:08.525 [2024-07-23 06:23:20.814923] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:08.525 [2024-07-23 06:23:20.814947] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x378806497e20 00:10:08.525 [2024-07-23 06:23:20.815001] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x378806435180 00:10:08.525 [2024-07-23 06:23:20.815006] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x378806435180 00:10:08.525 [2024-07-23 06:23:20.815028] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.525 pt2 00:10:08.525 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:08.525 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:08.525 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:08.525 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:08.525 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:08.525 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:08.525 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:08.525 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:08.525 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:08.525 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:08.525 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:08.525 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:08.525 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.525 06:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.784 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:08.784 "name": "raid_bdev1", 00:10:08.784 "uuid": "0bca2514-48bc-11ef-a06c-59ddad71024c", 00:10:08.784 "strip_size_kb": 64, 00:10:08.784 "state": "online", 00:10:08.784 "raid_level": "concat", 00:10:08.784 "superblock": true, 00:10:08.784 "num_base_bdevs": 2, 00:10:08.784 "num_base_bdevs_discovered": 2, 00:10:08.784 "num_base_bdevs_operational": 2, 00:10:08.784 "base_bdevs_list": [ 00:10:08.784 { 00:10:08.784 "name": "pt1", 00:10:08.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.784 "is_configured": true, 00:10:08.784 "data_offset": 2048, 00:10:08.784 "data_size": 63488 00:10:08.784 }, 00:10:08.784 { 00:10:08.784 "name": "pt2", 00:10:08.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.784 "is_configured": true, 00:10:08.784 "data_offset": 2048, 00:10:08.784 "data_size": 63488 00:10:08.784 } 00:10:08.784 ] 00:10:08.784 }' 00:10:08.784 06:23:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:08.784 06:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.043 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:09.043 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:09.043 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:09.043 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:09.043 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:09.043 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:09.043 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:09.043 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:09.302 [2024-07-23 06:23:21.626753] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.302 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:09.302 "name": "raid_bdev1", 00:10:09.302 "aliases": [ 00:10:09.302 "0bca2514-48bc-11ef-a06c-59ddad71024c" 00:10:09.302 ], 00:10:09.302 "product_name": "Raid Volume", 00:10:09.302 "block_size": 512, 00:10:09.302 "num_blocks": 126976, 00:10:09.302 "uuid": "0bca2514-48bc-11ef-a06c-59ddad71024c", 00:10:09.302 "assigned_rate_limits": { 00:10:09.302 "rw_ios_per_sec": 0, 00:10:09.302 "rw_mbytes_per_sec": 0, 00:10:09.302 "r_mbytes_per_sec": 0, 00:10:09.302 "w_mbytes_per_sec": 0 00:10:09.302 }, 00:10:09.302 "claimed": false, 00:10:09.302 "zoned": false, 00:10:09.302 "supported_io_types": { 00:10:09.302 "read": true, 00:10:09.302 "write": true, 00:10:09.302 "unmap": true, 00:10:09.302 "flush": true, 00:10:09.302 "reset": true, 00:10:09.302 "nvme_admin": false, 00:10:09.302 "nvme_io": false, 00:10:09.302 "nvme_io_md": false, 00:10:09.302 "write_zeroes": true, 00:10:09.302 "zcopy": false, 00:10:09.302 "get_zone_info": false, 00:10:09.302 "zone_management": false, 00:10:09.302 "zone_append": false, 00:10:09.302 "compare": false, 00:10:09.302 "compare_and_write": false, 00:10:09.302 "abort": false, 00:10:09.302 "seek_hole": false, 00:10:09.302 "seek_data": false, 00:10:09.302 "copy": false, 00:10:09.302 "nvme_iov_md": false 00:10:09.302 }, 00:10:09.302 "memory_domains": [ 00:10:09.302 { 00:10:09.302 "dma_device_id": "system", 00:10:09.302 "dma_device_type": 1 00:10:09.302 }, 00:10:09.302 { 00:10:09.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.302 "dma_device_type": 2 00:10:09.302 }, 00:10:09.302 { 00:10:09.302 "dma_device_id": "system", 00:10:09.302 "dma_device_type": 1 00:10:09.302 }, 00:10:09.302 { 00:10:09.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.302 "dma_device_type": 2 00:10:09.302 } 00:10:09.302 ], 00:10:09.302 "driver_specific": { 00:10:09.302 "raid": { 00:10:09.302 "uuid": "0bca2514-48bc-11ef-a06c-59ddad71024c", 00:10:09.302 "strip_size_kb": 64, 00:10:09.302 "state": "online", 00:10:09.302 "raid_level": "concat", 00:10:09.302 "superblock": true, 00:10:09.302 "num_base_bdevs": 2, 00:10:09.302 "num_base_bdevs_discovered": 2, 00:10:09.302 "num_base_bdevs_operational": 2, 00:10:09.302 "base_bdevs_list": [ 00:10:09.302 { 00:10:09.302 "name": "pt1", 00:10:09.302 "uuid": "00000000-0000-0000-0000-000000000001", 
00:10:09.302 "is_configured": true, 00:10:09.302 "data_offset": 2048, 00:10:09.302 "data_size": 63488 00:10:09.302 }, 00:10:09.302 { 00:10:09.302 "name": "pt2", 00:10:09.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.302 "is_configured": true, 00:10:09.302 "data_offset": 2048, 00:10:09.302 "data_size": 63488 00:10:09.302 } 00:10:09.302 ] 00:10:09.302 } 00:10:09.302 } 00:10:09.302 }' 00:10:09.302 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.302 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:09.302 pt2' 00:10:09.302 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:09.302 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:09.302 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:09.561 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:09.561 "name": "pt1", 00:10:09.561 "aliases": [ 00:10:09.561 "00000000-0000-0000-0000-000000000001" 00:10:09.561 ], 00:10:09.561 "product_name": "passthru", 00:10:09.561 "block_size": 512, 00:10:09.561 "num_blocks": 65536, 00:10:09.561 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.561 "assigned_rate_limits": { 00:10:09.561 "rw_ios_per_sec": 0, 00:10:09.561 "rw_mbytes_per_sec": 0, 00:10:09.561 "r_mbytes_per_sec": 0, 00:10:09.561 "w_mbytes_per_sec": 0 00:10:09.561 }, 00:10:09.561 "claimed": true, 00:10:09.561 "claim_type": "exclusive_write", 00:10:09.561 "zoned": false, 00:10:09.561 "supported_io_types": { 00:10:09.561 "read": true, 00:10:09.561 "write": true, 00:10:09.561 "unmap": true, 00:10:09.561 "flush": true, 00:10:09.561 "reset": true, 00:10:09.561 "nvme_admin": false, 00:10:09.561 "nvme_io": false, 00:10:09.561 "nvme_io_md": false, 00:10:09.561 "write_zeroes": true, 00:10:09.561 "zcopy": true, 00:10:09.561 "get_zone_info": false, 00:10:09.561 "zone_management": false, 00:10:09.561 "zone_append": false, 00:10:09.561 "compare": false, 00:10:09.561 "compare_and_write": false, 00:10:09.561 "abort": true, 00:10:09.561 "seek_hole": false, 00:10:09.561 "seek_data": false, 00:10:09.561 "copy": true, 00:10:09.561 "nvme_iov_md": false 00:10:09.561 }, 00:10:09.561 "memory_domains": [ 00:10:09.561 { 00:10:09.561 "dma_device_id": "system", 00:10:09.561 "dma_device_type": 1 00:10:09.561 }, 00:10:09.561 { 00:10:09.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.561 "dma_device_type": 2 00:10:09.561 } 00:10:09.561 ], 00:10:09.561 "driver_specific": { 00:10:09.561 "passthru": { 00:10:09.561 "name": "pt1", 00:10:09.561 "base_bdev_name": "malloc1" 00:10:09.561 } 00:10:09.561 } 00:10:09.561 }' 00:10:09.561 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:09.561 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:09.561 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:09.561 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:09.561 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:09.561 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:09.561 06:23:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:09.561 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:09.561 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:09.561 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:09.561 06:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:09.561 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:09.561 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:09.561 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:09.561 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:09.820 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:09.820 "name": "pt2", 00:10:09.820 "aliases": [ 00:10:09.820 "00000000-0000-0000-0000-000000000002" 00:10:09.820 ], 00:10:09.820 "product_name": "passthru", 00:10:09.820 "block_size": 512, 00:10:09.820 "num_blocks": 65536, 00:10:09.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.820 "assigned_rate_limits": { 00:10:09.820 "rw_ios_per_sec": 0, 00:10:09.820 "rw_mbytes_per_sec": 0, 00:10:09.820 "r_mbytes_per_sec": 0, 00:10:09.820 "w_mbytes_per_sec": 0 00:10:09.820 }, 00:10:09.820 "claimed": true, 00:10:09.820 "claim_type": "exclusive_write", 00:10:09.820 "zoned": false, 00:10:09.820 "supported_io_types": { 00:10:09.820 "read": true, 00:10:09.820 "write": true, 00:10:09.820 "unmap": true, 00:10:09.820 "flush": true, 00:10:09.820 "reset": true, 00:10:09.820 "nvme_admin": false, 00:10:09.820 "nvme_io": false, 00:10:09.820 "nvme_io_md": false, 00:10:09.820 "write_zeroes": true, 00:10:09.820 "zcopy": true, 00:10:09.820 "get_zone_info": false, 00:10:09.820 "zone_management": false, 00:10:09.820 "zone_append": false, 00:10:09.820 "compare": false, 00:10:09.820 "compare_and_write": false, 00:10:09.820 "abort": true, 00:10:09.820 "seek_hole": false, 00:10:09.820 "seek_data": false, 00:10:09.820 "copy": true, 00:10:09.820 "nvme_iov_md": false 00:10:09.820 }, 00:10:09.820 "memory_domains": [ 00:10:09.820 { 00:10:09.820 "dma_device_id": "system", 00:10:09.820 "dma_device_type": 1 00:10:09.820 }, 00:10:09.820 { 00:10:09.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.820 "dma_device_type": 2 00:10:09.820 } 00:10:09.820 ], 00:10:09.820 "driver_specific": { 00:10:09.821 "passthru": { 00:10:09.821 "name": "pt2", 00:10:09.821 "base_bdev_name": "malloc2" 00:10:09.821 } 00:10:09.821 } 00:10:09.821 }' 00:10:09.821 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:09.821 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:09.821 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:09.821 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:09.821 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:09.821 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:09.821 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:09.821 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:10:09.821 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:09.821 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:09.821 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:10.080 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:10.080 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:10.080 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:10.339 [2024-07-23 06:23:22.610797] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 0bca2514-48bc-11ef-a06c-59ddad71024c '!=' 0bca2514-48bc-11ef-a06c-59ddad71024c ']' 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 50285 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 50285 ']' 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 50285 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 50285 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:10.339 killing process with pid 50285 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50285' 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 50285 00:10:10.339 [2024-07-23 06:23:22.644271] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.339 [2024-07-23 06:23:22.644294] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.339 [2024-07-23 06:23:22.644306] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.339 [2024-07-23 06:23:22.644310] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x378806435180 name raid_bdev1, state offline 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 50285 00:10:10.339 [2024-07-23 06:23:22.655820] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:10.339 00:10:10.339 real 0m9.137s 00:10:10.339 user 0m16.017s 00:10:10.339 sys 0m1.512s 00:10:10.339 06:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.339 06:23:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.339 ************************************ 00:10:10.339 END TEST raid_superblock_test 00:10:10.339 ************************************ 00:10:10.598 06:23:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:10.598 06:23:22 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:10:10.598 06:23:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:10.598 06:23:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.598 06:23:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:10.598 ************************************ 00:10:10.598 START TEST raid_read_error_test 00:10:10.598 ************************************ 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.kfpTTjhko8 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50554 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50554 /var/tmp/spdk-raid.sock 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 50554 ']' 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.598 06:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.598 [2024-07-23 06:23:22.892668] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:10.598 [2024-07-23 06:23:22.892912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:11.165 EAL: TSC is not safe to use in SMP mode 00:10:11.165 EAL: TSC is not invariant 00:10:11.165 [2024-07-23 06:23:23.423824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.165 [2024-07-23 06:23:23.511486] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:11.166 [2024-07-23 06:23:23.513639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.166 [2024-07-23 06:23:23.514394] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.166 [2024-07-23 06:23:23.514411] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.731 06:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.731 06:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:11.731 06:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:11.731 06:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:11.994 BaseBdev1_malloc 00:10:11.994 06:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:12.261 true 00:10:12.261 06:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:12.520 [2024-07-23 06:23:24.850931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:12.520 [2024-07-23 06:23:24.851045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.520 [2024-07-23 06:23:24.851088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x178567a34780 00:10:12.520 [2024-07-23 06:23:24.851101] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:10:12.520 [2024-07-23 06:23:24.852030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.520 [2024-07-23 06:23:24.852077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:12.520 BaseBdev1 00:10:12.520 06:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:12.520 06:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:12.778 BaseBdev2_malloc 00:10:12.778 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:13.037 true 00:10:13.037 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:13.295 [2024-07-23 06:23:25.618981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:13.295 [2024-07-23 06:23:25.619065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.295 [2024-07-23 06:23:25.619104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x178567a34c80 00:10:13.295 [2024-07-23 06:23:25.619114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.295 [2024-07-23 06:23:25.620045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.295 [2024-07-23 06:23:25.620077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:13.295 BaseBdev2 00:10:13.295 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:13.554 [2024-07-23 06:23:25.891023] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.554 [2024-07-23 06:23:25.891783] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.554 [2024-07-23 06:23:25.891879] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x178567a34f00 00:10:13.554 [2024-07-23 06:23:25.891889] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:13.554 [2024-07-23 06:23:25.891930] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x178567aa0e20 00:10:13.554 [2024-07-23 06:23:25.892023] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x178567a34f00 00:10:13.554 [2024-07-23 06:23:25.892030] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x178567a34f00 00:10:13.554 [2024-07-23 06:23:25.892092] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.554 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:13.554 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:13.554 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:13.554 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
00:10:13.554 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:13.554 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:13.554 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:13.554 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:13.554 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:13.554 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:13.554 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:13.554 06:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.813 06:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:13.813 "name": "raid_bdev1", 00:10:13.813 "uuid": "11a5e176-48bc-11ef-a06c-59ddad71024c", 00:10:13.813 "strip_size_kb": 64, 00:10:13.813 "state": "online", 00:10:13.813 "raid_level": "concat", 00:10:13.813 "superblock": true, 00:10:13.813 "num_base_bdevs": 2, 00:10:13.813 "num_base_bdevs_discovered": 2, 00:10:13.813 "num_base_bdevs_operational": 2, 00:10:13.813 "base_bdevs_list": [ 00:10:13.813 { 00:10:13.813 "name": "BaseBdev1", 00:10:13.813 "uuid": "4365eef6-2d66-6e5d-b47d-32ba908a69f6", 00:10:13.813 "is_configured": true, 00:10:13.813 "data_offset": 2048, 00:10:13.813 "data_size": 63488 00:10:13.813 }, 00:10:13.813 { 00:10:13.813 "name": "BaseBdev2", 00:10:13.813 "uuid": "ce9baf6c-4661-b052-b68e-6f3610f49553", 00:10:13.813 "is_configured": true, 00:10:13.813 "data_offset": 2048, 00:10:13.813 "data_size": 63488 00:10:13.813 } 00:10:13.813 ] 00:10:13.813 }' 00:10:13.813 06:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:13.813 06:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.074 06:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:14.074 06:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:14.074 [2024-07-23 06:23:26.551307] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x178567aa0ec0 00:10:15.012 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:15.579 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:15.579 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:10:15.579 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:15.579 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:15.579 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:15.579 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:15.579 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:15.579 06:23:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:15.579 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:15.579 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:15.579 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:15.579 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:15.579 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:15.579 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:15.579 06:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.579 06:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:15.579 "name": "raid_bdev1", 00:10:15.579 "uuid": "11a5e176-48bc-11ef-a06c-59ddad71024c", 00:10:15.579 "strip_size_kb": 64, 00:10:15.579 "state": "online", 00:10:15.579 "raid_level": "concat", 00:10:15.579 "superblock": true, 00:10:15.579 "num_base_bdevs": 2, 00:10:15.579 "num_base_bdevs_discovered": 2, 00:10:15.579 "num_base_bdevs_operational": 2, 00:10:15.579 "base_bdevs_list": [ 00:10:15.579 { 00:10:15.579 "name": "BaseBdev1", 00:10:15.579 "uuid": "4365eef6-2d66-6e5d-b47d-32ba908a69f6", 00:10:15.579 "is_configured": true, 00:10:15.579 "data_offset": 2048, 00:10:15.579 "data_size": 63488 00:10:15.579 }, 00:10:15.579 { 00:10:15.579 "name": "BaseBdev2", 00:10:15.579 "uuid": "ce9baf6c-4661-b052-b68e-6f3610f49553", 00:10:15.579 "is_configured": true, 00:10:15.579 "data_offset": 2048, 00:10:15.579 "data_size": 63488 00:10:15.579 } 00:10:15.579 ] 00:10:15.579 }' 00:10:15.579 06:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:15.579 06:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.146 06:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:16.146 [2024-07-23 06:23:28.634767] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:16.146 [2024-07-23 06:23:28.634813] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.146 [2024-07-23 06:23:28.635212] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.146 [2024-07-23 06:23:28.635224] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.146 [2024-07-23 06:23:28.635232] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.146 [2024-07-23 06:23:28.635237] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x178567a34f00 name raid_bdev1, state offline 00:10:16.146 0 00:10:16.146 06:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50554 00:10:16.147 06:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 50554 ']' 00:10:16.147 06:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 50554 00:10:16.147 06:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:10:16.147 06:23:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:16.147 06:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 50554 00:10:16.147 06:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:10:16.147 06:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:10:16.147 killing process with pid 50554 00:10:16.147 06:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:10:16.147 06:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50554' 00:10:16.147 06:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 50554 00:10:16.147 06:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 50554 00:10:16.147 [2024-07-23 06:23:28.662660] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:16.405 [2024-07-23 06:23:28.679056] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.665 06:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.kfpTTjhko8 00:10:16.665 06:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:16.665 06:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:16.665 06:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:10:16.665 06:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:10:16.665 06:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:16.665 06:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:16.665 06:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:10:16.665 00:10:16.665 real 0m6.054s 00:10:16.665 user 0m9.272s 00:10:16.665 sys 0m1.047s 00:10:16.665 06:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:16.665 ************************************ 00:10:16.665 06:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.665 END TEST raid_read_error_test 00:10:16.665 ************************************ 00:10:16.665 06:23:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:16.665 06:23:28 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:16.665 06:23:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:16.665 06:23:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.665 06:23:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.665 ************************************ 00:10:16.665 START TEST raid_write_error_test 00:10:16.665 ************************************ 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.jH862QaQv5 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50682 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50682 /var/tmp/spdk-raid.sock 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 50682 ']' 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.665 06:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.665 [2024-07-23 06:23:28.996732] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
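For orientation while reading the trace that follows: raid_write_error_test builds a 2-disk concat raid from error-injectable malloc bdevs, injects write failures into one leg, drives I/O with the already-running bdevperf (started above with -z), and then checks that the reported failure rate is non-zero. The condensed RPC sequence below is a sketch reconstructed from the xtrace entries in this log (socket path, bdev names, and options are copied from the log; intermediate verification steps and exact ordering are omitted); it is not the test script itself.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# error-injectable base bdevs: malloc -> error bdev -> passthru named BaseBdevN
$RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
$RPC bdev_error_create BaseBdev1_malloc
$RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
$RPC bdev_malloc_create 32 512 -b BaseBdev2_malloc
$RPC bdev_error_create BaseBdev2_malloc
$RPC bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
# assemble the raid under test and inject write errors into one base bdev
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
$RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
# run I/O through the waiting bdevperf, then tear down; the test asserts fail/s != 0.00
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
$RPC bdev_raid_delete raid_bdev1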
00:10:16.665 [2024-07-23 06:23:28.996931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:17.233 EAL: TSC is not safe to use in SMP mode 00:10:17.233 EAL: TSC is not invariant 00:10:17.233 [2024-07-23 06:23:29.546081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.233 [2024-07-23 06:23:29.674419] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:17.233 [2024-07-23 06:23:29.677195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.233 [2024-07-23 06:23:29.678465] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.233 [2024-07-23 06:23:29.678492] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.800 06:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:17.800 06:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:17.800 06:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:17.800 06:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:17.800 BaseBdev1_malloc 00:10:18.059 06:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:18.317 true 00:10:18.317 06:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:18.317 [2024-07-23 06:23:30.822775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:18.317 [2024-07-23 06:23:30.822842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.317 [2024-07-23 06:23:30.822871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10786d234780 00:10:18.317 [2024-07-23 06:23:30.822880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.317 [2024-07-23 06:23:30.823586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.317 [2024-07-23 06:23:30.823616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:18.317 BaseBdev1 00:10:18.576 06:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:18.576 06:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:18.576 BaseBdev2_malloc 00:10:18.576 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:18.835 true 00:10:18.835 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:19.094 [2024-07-23 06:23:31.590815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:19.094 [2024-07-23 06:23:31.590872] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.094 [2024-07-23 06:23:31.590901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10786d234c80 00:10:19.094 [2024-07-23 06:23:31.590910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.094 [2024-07-23 06:23:31.591596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.094 [2024-07-23 06:23:31.591626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:19.094 BaseBdev2 00:10:19.094 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:19.692 [2024-07-23 06:23:31.874842] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.692 [2024-07-23 06:23:31.875439] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.692 [2024-07-23 06:23:31.875506] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x10786d234f00 00:10:19.692 [2024-07-23 06:23:31.875513] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:19.692 [2024-07-23 06:23:31.875548] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10786d2a0e20 00:10:19.692 [2024-07-23 06:23:31.875632] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10786d234f00 00:10:19.692 [2024-07-23 06:23:31.875641] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10786d234f00 00:10:19.692 [2024-07-23 06:23:31.875672] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.692 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:19.692 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:19.692 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:19.692 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:19.692 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:19.692 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:19.692 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:19.692 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:19.692 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:19.692 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:19.692 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.692 06:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.692 06:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:19.692 "name": "raid_bdev1", 00:10:19.692 "uuid": "1536f0a2-48bc-11ef-a06c-59ddad71024c", 00:10:19.692 "strip_size_kb": 64, 00:10:19.692 "state": "online", 00:10:19.692 
"raid_level": "concat", 00:10:19.692 "superblock": true, 00:10:19.692 "num_base_bdevs": 2, 00:10:19.692 "num_base_bdevs_discovered": 2, 00:10:19.692 "num_base_bdevs_operational": 2, 00:10:19.692 "base_bdevs_list": [ 00:10:19.692 { 00:10:19.692 "name": "BaseBdev1", 00:10:19.692 "uuid": "bf5971b2-b736-d25d-a0b8-00ca28aded85", 00:10:19.692 "is_configured": true, 00:10:19.692 "data_offset": 2048, 00:10:19.692 "data_size": 63488 00:10:19.692 }, 00:10:19.692 { 00:10:19.692 "name": "BaseBdev2", 00:10:19.692 "uuid": "1d6d3640-b7a0-d051-8ef7-17ea29668783", 00:10:19.692 "is_configured": true, 00:10:19.692 "data_offset": 2048, 00:10:19.692 "data_size": 63488 00:10:19.692 } 00:10:19.692 ] 00:10:19.692 }' 00:10:19.692 06:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:19.692 06:23:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.951 06:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:19.951 06:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:20.210 [2024-07-23 06:23:32.543048] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10786d2a0ec0 00:10:21.145 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:21.403 06:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.661 06:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:21.661 "name": "raid_bdev1", 00:10:21.661 "uuid": "1536f0a2-48bc-11ef-a06c-59ddad71024c", 00:10:21.661 "strip_size_kb": 64, 00:10:21.661 "state": "online", 00:10:21.661 
"raid_level": "concat", 00:10:21.661 "superblock": true, 00:10:21.661 "num_base_bdevs": 2, 00:10:21.661 "num_base_bdevs_discovered": 2, 00:10:21.661 "num_base_bdevs_operational": 2, 00:10:21.661 "base_bdevs_list": [ 00:10:21.661 { 00:10:21.661 "name": "BaseBdev1", 00:10:21.661 "uuid": "bf5971b2-b736-d25d-a0b8-00ca28aded85", 00:10:21.661 "is_configured": true, 00:10:21.661 "data_offset": 2048, 00:10:21.661 "data_size": 63488 00:10:21.661 }, 00:10:21.661 { 00:10:21.661 "name": "BaseBdev2", 00:10:21.661 "uuid": "1d6d3640-b7a0-d051-8ef7-17ea29668783", 00:10:21.661 "is_configured": true, 00:10:21.661 "data_offset": 2048, 00:10:21.661 "data_size": 63488 00:10:21.661 } 00:10:21.661 ] 00:10:21.661 }' 00:10:21.661 06:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:21.661 06:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.920 06:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:22.179 [2024-07-23 06:23:34.608765] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.179 [2024-07-23 06:23:34.608792] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.179 [2024-07-23 06:23:34.609150] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.179 [2024-07-23 06:23:34.609160] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.179 [2024-07-23 06:23:34.609169] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.179 [2024-07-23 06:23:34.609175] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10786d234f00 name raid_bdev1, state offline 00:10:22.179 0 00:10:22.179 06:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50682 00:10:22.179 06:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 50682 ']' 00:10:22.179 06:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 50682 00:10:22.179 06:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:10:22.179 06:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:22.179 06:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 50682 00:10:22.179 06:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:10:22.179 06:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:10:22.179 06:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:10:22.179 killing process with pid 50682 00:10:22.179 06:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50682' 00:10:22.179 06:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 50682 00:10:22.179 [2024-07-23 06:23:34.636901] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.179 06:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 50682 00:10:22.179 [2024-07-23 06:23:34.648638] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:22.506 06:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.jH862QaQv5 00:10:22.506 06:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:22.506 06:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:22.506 06:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:10:22.506 06:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:10:22.507 06:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:22.507 06:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:22.507 06:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:10:22.507 00:10:22.507 real 0m5.857s 00:10:22.507 user 0m9.068s 00:10:22.507 sys 0m0.931s 00:10:22.507 06:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.507 ************************************ 00:10:22.507 END TEST raid_write_error_test 00:10:22.507 ************************************ 00:10:22.507 06:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.507 06:23:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:22.507 06:23:34 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:22.507 06:23:34 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:10:22.507 06:23:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:22.507 06:23:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.507 06:23:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.507 ************************************ 00:10:22.507 START TEST raid_state_function_test 00:10:22.507 ************************************ 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:22.507 06:23:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=50804 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 50804' 00:10:22.507 Process raid pid: 50804 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 50804 /var/tmp/spdk-raid.sock 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 50804 ']' 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.507 06:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.507 [2024-07-23 06:23:34.898154] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:22.507 [2024-07-23 06:23:34.898423] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:23.074 EAL: TSC is not safe to use in SMP mode 00:10:23.074 EAL: TSC is not invariant 00:10:23.074 [2024-07-23 06:23:35.441114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.074 [2024-07-23 06:23:35.527655] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
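For orientation: raid_state_function_test walks a raid1 "Existed_Raid" through its configuring/online/offline states by creating the raid before its base bdevs exist, adding the base bdevs one at a time, and finally removing one member. The sketch below condenses the RPC sequence visible in the xtrace entries that follow (names and options copied from the log; the repeated delete/re-create cycles and per-step bdev_raid_get_bdevs checks are abbreviated); it is not the test script itself.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# a raid created while its base bdevs are missing stays in "configuring"
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
$RPC bdev_raid_get_bdevs all    # state: configuring, 0 of 2 base bdevs discovered
# add the base bdevs; once both are claimed the raid transitions to "online"
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_malloc_create 32 512 -b BaseBdev2
$RPC bdev_raid_get_bdevs all    # state: online, 2 of 2 discovered
# raid1 tolerates losing one member: it stays online with a single operational base bdev
$RPC bdev_malloc_delete BaseBdev1
$RPC bdev_raid_get_bdevs all    # state: online, 1 operational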
00:10:23.074 [2024-07-23 06:23:35.529847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.074 [2024-07-23 06:23:35.530629] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.074 [2024-07-23 06:23:35.530646] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.639 06:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:23.639 06:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:10:23.639 06:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:23.639 [2024-07-23 06:23:36.150957] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:23.639 [2024-07-23 06:23:36.151018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:23.639 [2024-07-23 06:23:36.151023] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.639 [2024-07-23 06:23:36.151032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.897 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:23.897 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:23.897 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:23.897 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:23.897 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:23.897 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:23.897 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:23.897 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:23.897 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:23.897 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:23.897 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.897 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.156 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:24.156 "name": "Existed_Raid", 00:10:24.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.156 "strip_size_kb": 0, 00:10:24.156 "state": "configuring", 00:10:24.156 "raid_level": "raid1", 00:10:24.156 "superblock": false, 00:10:24.156 "num_base_bdevs": 2, 00:10:24.156 "num_base_bdevs_discovered": 0, 00:10:24.156 "num_base_bdevs_operational": 2, 00:10:24.156 "base_bdevs_list": [ 00:10:24.156 { 00:10:24.156 "name": "BaseBdev1", 00:10:24.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.156 "is_configured": false, 00:10:24.156 "data_offset": 0, 00:10:24.156 "data_size": 0 00:10:24.156 }, 00:10:24.156 { 00:10:24.156 "name": "BaseBdev2", 00:10:24.156 
"uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.156 "is_configured": false, 00:10:24.156 "data_offset": 0, 00:10:24.156 "data_size": 0 00:10:24.156 } 00:10:24.156 ] 00:10:24.156 }' 00:10:24.156 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:24.156 06:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.443 06:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:24.701 [2024-07-23 06:23:37.018990] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.701 [2024-07-23 06:23:37.019019] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x248d21c34500 name Existed_Raid, state configuring 00:10:24.701 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:24.960 [2024-07-23 06:23:37.259006] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.960 [2024-07-23 06:23:37.259075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.960 [2024-07-23 06:23:37.259081] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.960 [2024-07-23 06:23:37.259090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.960 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:25.219 [2024-07-23 06:23:37.492087] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.219 BaseBdev1 00:10:25.219 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:25.219 06:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:25.219 06:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:25.219 06:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:25.219 06:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:25.219 06:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:25.219 06:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:25.477 06:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:25.477 [ 00:10:25.477 { 00:10:25.477 "name": "BaseBdev1", 00:10:25.478 "aliases": [ 00:10:25.478 "188fe764-48bc-11ef-a06c-59ddad71024c" 00:10:25.478 ], 00:10:25.478 "product_name": "Malloc disk", 00:10:25.478 "block_size": 512, 00:10:25.478 "num_blocks": 65536, 00:10:25.478 "uuid": "188fe764-48bc-11ef-a06c-59ddad71024c", 00:10:25.478 "assigned_rate_limits": { 00:10:25.478 "rw_ios_per_sec": 0, 00:10:25.478 "rw_mbytes_per_sec": 0, 00:10:25.478 "r_mbytes_per_sec": 0, 00:10:25.478 "w_mbytes_per_sec": 0 00:10:25.478 }, 00:10:25.478 
"claimed": true, 00:10:25.478 "claim_type": "exclusive_write", 00:10:25.478 "zoned": false, 00:10:25.478 "supported_io_types": { 00:10:25.478 "read": true, 00:10:25.478 "write": true, 00:10:25.478 "unmap": true, 00:10:25.478 "flush": true, 00:10:25.478 "reset": true, 00:10:25.478 "nvme_admin": false, 00:10:25.478 "nvme_io": false, 00:10:25.478 "nvme_io_md": false, 00:10:25.478 "write_zeroes": true, 00:10:25.478 "zcopy": true, 00:10:25.478 "get_zone_info": false, 00:10:25.478 "zone_management": false, 00:10:25.478 "zone_append": false, 00:10:25.478 "compare": false, 00:10:25.478 "compare_and_write": false, 00:10:25.478 "abort": true, 00:10:25.478 "seek_hole": false, 00:10:25.478 "seek_data": false, 00:10:25.478 "copy": true, 00:10:25.478 "nvme_iov_md": false 00:10:25.478 }, 00:10:25.478 "memory_domains": [ 00:10:25.478 { 00:10:25.478 "dma_device_id": "system", 00:10:25.478 "dma_device_type": 1 00:10:25.478 }, 00:10:25.478 { 00:10:25.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.478 "dma_device_type": 2 00:10:25.478 } 00:10:25.478 ], 00:10:25.478 "driver_specific": {} 00:10:25.478 } 00:10:25.478 ] 00:10:25.478 06:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:25.478 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:25.478 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:25.478 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:25.478 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:25.478 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:25.478 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:25.478 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:25.478 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:25.478 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:25.478 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:25.478 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.478 06:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.736 06:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:25.736 "name": "Existed_Raid", 00:10:25.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.736 "strip_size_kb": 0, 00:10:25.736 "state": "configuring", 00:10:25.736 "raid_level": "raid1", 00:10:25.736 "superblock": false, 00:10:25.736 "num_base_bdevs": 2, 00:10:25.736 "num_base_bdevs_discovered": 1, 00:10:25.736 "num_base_bdevs_operational": 2, 00:10:25.736 "base_bdevs_list": [ 00:10:25.736 { 00:10:25.736 "name": "BaseBdev1", 00:10:25.736 "uuid": "188fe764-48bc-11ef-a06c-59ddad71024c", 00:10:25.736 "is_configured": true, 00:10:25.736 "data_offset": 0, 00:10:25.736 "data_size": 65536 00:10:25.736 }, 00:10:25.736 { 00:10:25.736 "name": "BaseBdev2", 00:10:25.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.736 
"is_configured": false, 00:10:25.736 "data_offset": 0, 00:10:25.736 "data_size": 0 00:10:25.736 } 00:10:25.736 ] 00:10:25.736 }' 00:10:25.736 06:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:25.736 06:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.304 06:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:26.304 [2024-07-23 06:23:38.795069] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.304 [2024-07-23 06:23:38.795102] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x248d21c34500 name Existed_Raid, state configuring 00:10:26.304 06:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:26.562 [2024-07-23 06:23:39.079096] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.562 [2024-07-23 06:23:39.079963] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.562 [2024-07-23 06:23:39.080004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.821 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:26.821 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:26.821 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:26.821 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:26.821 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:26.821 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:26.821 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:26.821 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:26.821 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:26.821 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:26.821 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:26.821 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:26.821 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:26.821 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.080 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:27.080 "name": "Existed_Raid", 00:10:27.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.080 "strip_size_kb": 0, 00:10:27.080 "state": "configuring", 00:10:27.080 "raid_level": "raid1", 00:10:27.080 "superblock": false, 00:10:27.080 "num_base_bdevs": 2, 00:10:27.080 "num_base_bdevs_discovered": 1, 00:10:27.080 "num_base_bdevs_operational": 
2, 00:10:27.080 "base_bdevs_list": [ 00:10:27.080 { 00:10:27.080 "name": "BaseBdev1", 00:10:27.080 "uuid": "188fe764-48bc-11ef-a06c-59ddad71024c", 00:10:27.080 "is_configured": true, 00:10:27.080 "data_offset": 0, 00:10:27.080 "data_size": 65536 00:10:27.080 }, 00:10:27.080 { 00:10:27.080 "name": "BaseBdev2", 00:10:27.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.080 "is_configured": false, 00:10:27.080 "data_offset": 0, 00:10:27.080 "data_size": 0 00:10:27.080 } 00:10:27.080 ] 00:10:27.080 }' 00:10:27.080 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:27.080 06:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.339 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:27.598 [2024-07-23 06:23:39.943294] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.598 [2024-07-23 06:23:39.943322] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x248d21c34a00 00:10:27.598 [2024-07-23 06:23:39.943344] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:27.598 [2024-07-23 06:23:39.943383] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x248d21c97e20 00:10:27.598 [2024-07-23 06:23:39.943476] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x248d21c34a00 00:10:27.598 [2024-07-23 06:23:39.943481] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x248d21c34a00 00:10:27.598 [2024-07-23 06:23:39.943515] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.598 BaseBdev2 00:10:27.598 06:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:27.598 06:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:27.598 06:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:27.598 06:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:27.598 06:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:27.598 06:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:27.598 06:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:27.858 06:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:28.117 [ 00:10:28.117 { 00:10:28.117 "name": "BaseBdev2", 00:10:28.117 "aliases": [ 00:10:28.117 "1a061176-48bc-11ef-a06c-59ddad71024c" 00:10:28.117 ], 00:10:28.117 "product_name": "Malloc disk", 00:10:28.117 "block_size": 512, 00:10:28.117 "num_blocks": 65536, 00:10:28.117 "uuid": "1a061176-48bc-11ef-a06c-59ddad71024c", 00:10:28.117 "assigned_rate_limits": { 00:10:28.117 "rw_ios_per_sec": 0, 00:10:28.117 "rw_mbytes_per_sec": 0, 00:10:28.117 "r_mbytes_per_sec": 0, 00:10:28.117 "w_mbytes_per_sec": 0 00:10:28.117 }, 00:10:28.117 "claimed": true, 00:10:28.117 "claim_type": "exclusive_write", 00:10:28.117 "zoned": false, 00:10:28.117 
"supported_io_types": { 00:10:28.117 "read": true, 00:10:28.117 "write": true, 00:10:28.117 "unmap": true, 00:10:28.117 "flush": true, 00:10:28.117 "reset": true, 00:10:28.117 "nvme_admin": false, 00:10:28.117 "nvme_io": false, 00:10:28.117 "nvme_io_md": false, 00:10:28.117 "write_zeroes": true, 00:10:28.117 "zcopy": true, 00:10:28.117 "get_zone_info": false, 00:10:28.117 "zone_management": false, 00:10:28.117 "zone_append": false, 00:10:28.117 "compare": false, 00:10:28.117 "compare_and_write": false, 00:10:28.117 "abort": true, 00:10:28.117 "seek_hole": false, 00:10:28.117 "seek_data": false, 00:10:28.117 "copy": true, 00:10:28.117 "nvme_iov_md": false 00:10:28.117 }, 00:10:28.117 "memory_domains": [ 00:10:28.117 { 00:10:28.117 "dma_device_id": "system", 00:10:28.117 "dma_device_type": 1 00:10:28.117 }, 00:10:28.117 { 00:10:28.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.117 "dma_device_type": 2 00:10:28.117 } 00:10:28.117 ], 00:10:28.117 "driver_specific": {} 00:10:28.117 } 00:10:28.117 ] 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.117 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.376 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:28.376 "name": "Existed_Raid", 00:10:28.376 "uuid": "1a061919-48bc-11ef-a06c-59ddad71024c", 00:10:28.376 "strip_size_kb": 0, 00:10:28.376 "state": "online", 00:10:28.376 "raid_level": "raid1", 00:10:28.376 "superblock": false, 00:10:28.376 "num_base_bdevs": 2, 00:10:28.376 "num_base_bdevs_discovered": 2, 00:10:28.376 "num_base_bdevs_operational": 2, 00:10:28.376 "base_bdevs_list": [ 00:10:28.376 { 00:10:28.376 "name": "BaseBdev1", 00:10:28.376 "uuid": "188fe764-48bc-11ef-a06c-59ddad71024c", 00:10:28.376 "is_configured": true, 00:10:28.376 "data_offset": 0, 00:10:28.376 "data_size": 65536 00:10:28.376 }, 00:10:28.376 { 00:10:28.376 "name": 
"BaseBdev2", 00:10:28.376 "uuid": "1a061176-48bc-11ef-a06c-59ddad71024c", 00:10:28.376 "is_configured": true, 00:10:28.376 "data_offset": 0, 00:10:28.376 "data_size": 65536 00:10:28.376 } 00:10:28.376 ] 00:10:28.376 }' 00:10:28.376 06:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:28.376 06:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.647 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:28.647 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:28.647 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:28.647 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:28.647 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:28.647 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:28.647 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:28.647 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:28.917 [2024-07-23 06:23:41.311243] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.917 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:28.917 "name": "Existed_Raid", 00:10:28.917 "aliases": [ 00:10:28.917 "1a061919-48bc-11ef-a06c-59ddad71024c" 00:10:28.917 ], 00:10:28.917 "product_name": "Raid Volume", 00:10:28.917 "block_size": 512, 00:10:28.917 "num_blocks": 65536, 00:10:28.917 "uuid": "1a061919-48bc-11ef-a06c-59ddad71024c", 00:10:28.917 "assigned_rate_limits": { 00:10:28.917 "rw_ios_per_sec": 0, 00:10:28.917 "rw_mbytes_per_sec": 0, 00:10:28.917 "r_mbytes_per_sec": 0, 00:10:28.917 "w_mbytes_per_sec": 0 00:10:28.917 }, 00:10:28.917 "claimed": false, 00:10:28.917 "zoned": false, 00:10:28.917 "supported_io_types": { 00:10:28.917 "read": true, 00:10:28.917 "write": true, 00:10:28.917 "unmap": false, 00:10:28.917 "flush": false, 00:10:28.917 "reset": true, 00:10:28.917 "nvme_admin": false, 00:10:28.917 "nvme_io": false, 00:10:28.917 "nvme_io_md": false, 00:10:28.917 "write_zeroes": true, 00:10:28.917 "zcopy": false, 00:10:28.917 "get_zone_info": false, 00:10:28.917 "zone_management": false, 00:10:28.917 "zone_append": false, 00:10:28.917 "compare": false, 00:10:28.917 "compare_and_write": false, 00:10:28.917 "abort": false, 00:10:28.917 "seek_hole": false, 00:10:28.917 "seek_data": false, 00:10:28.917 "copy": false, 00:10:28.917 "nvme_iov_md": false 00:10:28.917 }, 00:10:28.917 "memory_domains": [ 00:10:28.917 { 00:10:28.917 "dma_device_id": "system", 00:10:28.917 "dma_device_type": 1 00:10:28.917 }, 00:10:28.917 { 00:10:28.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.917 "dma_device_type": 2 00:10:28.917 }, 00:10:28.917 { 00:10:28.917 "dma_device_id": "system", 00:10:28.917 "dma_device_type": 1 00:10:28.917 }, 00:10:28.917 { 00:10:28.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.917 "dma_device_type": 2 00:10:28.917 } 00:10:28.917 ], 00:10:28.917 "driver_specific": { 00:10:28.917 "raid": { 00:10:28.917 "uuid": "1a061919-48bc-11ef-a06c-59ddad71024c", 00:10:28.917 "strip_size_kb": 0, 00:10:28.917 "state": "online", 00:10:28.917 
"raid_level": "raid1", 00:10:28.917 "superblock": false, 00:10:28.917 "num_base_bdevs": 2, 00:10:28.917 "num_base_bdevs_discovered": 2, 00:10:28.917 "num_base_bdevs_operational": 2, 00:10:28.917 "base_bdevs_list": [ 00:10:28.917 { 00:10:28.917 "name": "BaseBdev1", 00:10:28.917 "uuid": "188fe764-48bc-11ef-a06c-59ddad71024c", 00:10:28.917 "is_configured": true, 00:10:28.917 "data_offset": 0, 00:10:28.917 "data_size": 65536 00:10:28.917 }, 00:10:28.917 { 00:10:28.917 "name": "BaseBdev2", 00:10:28.917 "uuid": "1a061176-48bc-11ef-a06c-59ddad71024c", 00:10:28.917 "is_configured": true, 00:10:28.917 "data_offset": 0, 00:10:28.917 "data_size": 65536 00:10:28.917 } 00:10:28.917 ] 00:10:28.917 } 00:10:28.917 } 00:10:28.917 }' 00:10:28.917 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:28.917 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:28.917 BaseBdev2' 00:10:28.917 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:28.917 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:28.917 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:29.176 "name": "BaseBdev1", 00:10:29.176 "aliases": [ 00:10:29.176 "188fe764-48bc-11ef-a06c-59ddad71024c" 00:10:29.176 ], 00:10:29.176 "product_name": "Malloc disk", 00:10:29.176 "block_size": 512, 00:10:29.176 "num_blocks": 65536, 00:10:29.176 "uuid": "188fe764-48bc-11ef-a06c-59ddad71024c", 00:10:29.176 "assigned_rate_limits": { 00:10:29.176 "rw_ios_per_sec": 0, 00:10:29.176 "rw_mbytes_per_sec": 0, 00:10:29.176 "r_mbytes_per_sec": 0, 00:10:29.176 "w_mbytes_per_sec": 0 00:10:29.176 }, 00:10:29.176 "claimed": true, 00:10:29.176 "claim_type": "exclusive_write", 00:10:29.176 "zoned": false, 00:10:29.176 "supported_io_types": { 00:10:29.176 "read": true, 00:10:29.176 "write": true, 00:10:29.176 "unmap": true, 00:10:29.176 "flush": true, 00:10:29.176 "reset": true, 00:10:29.176 "nvme_admin": false, 00:10:29.176 "nvme_io": false, 00:10:29.176 "nvme_io_md": false, 00:10:29.176 "write_zeroes": true, 00:10:29.176 "zcopy": true, 00:10:29.176 "get_zone_info": false, 00:10:29.176 "zone_management": false, 00:10:29.176 "zone_append": false, 00:10:29.176 "compare": false, 00:10:29.176 "compare_and_write": false, 00:10:29.176 "abort": true, 00:10:29.176 "seek_hole": false, 00:10:29.176 "seek_data": false, 00:10:29.176 "copy": true, 00:10:29.176 "nvme_iov_md": false 00:10:29.176 }, 00:10:29.176 "memory_domains": [ 00:10:29.176 { 00:10:29.176 "dma_device_id": "system", 00:10:29.176 "dma_device_type": 1 00:10:29.176 }, 00:10:29.176 { 00:10:29.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.176 "dma_device_type": 2 00:10:29.176 } 00:10:29.176 ], 00:10:29.176 "driver_specific": {} 00:10:29.176 }' 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:29.176 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:29.436 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:29.436 "name": "BaseBdev2", 00:10:29.436 "aliases": [ 00:10:29.436 "1a061176-48bc-11ef-a06c-59ddad71024c" 00:10:29.436 ], 00:10:29.436 "product_name": "Malloc disk", 00:10:29.436 "block_size": 512, 00:10:29.436 "num_blocks": 65536, 00:10:29.436 "uuid": "1a061176-48bc-11ef-a06c-59ddad71024c", 00:10:29.436 "assigned_rate_limits": { 00:10:29.436 "rw_ios_per_sec": 0, 00:10:29.436 "rw_mbytes_per_sec": 0, 00:10:29.436 "r_mbytes_per_sec": 0, 00:10:29.436 "w_mbytes_per_sec": 0 00:10:29.436 }, 00:10:29.436 "claimed": true, 00:10:29.436 "claim_type": "exclusive_write", 00:10:29.436 "zoned": false, 00:10:29.436 "supported_io_types": { 00:10:29.436 "read": true, 00:10:29.436 "write": true, 00:10:29.436 "unmap": true, 00:10:29.436 "flush": true, 00:10:29.436 "reset": true, 00:10:29.436 "nvme_admin": false, 00:10:29.436 "nvme_io": false, 00:10:29.436 "nvme_io_md": false, 00:10:29.436 "write_zeroes": true, 00:10:29.436 "zcopy": true, 00:10:29.436 "get_zone_info": false, 00:10:29.436 "zone_management": false, 00:10:29.436 "zone_append": false, 00:10:29.436 "compare": false, 00:10:29.436 "compare_and_write": false, 00:10:29.436 "abort": true, 00:10:29.436 "seek_hole": false, 00:10:29.436 "seek_data": false, 00:10:29.436 "copy": true, 00:10:29.436 "nvme_iov_md": false 00:10:29.436 }, 00:10:29.436 "memory_domains": [ 00:10:29.436 { 00:10:29.436 "dma_device_id": "system", 00:10:29.436 "dma_device_type": 1 00:10:29.436 }, 00:10:29.436 { 00:10:29.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.436 "dma_device_type": 2 00:10:29.436 } 00:10:29.436 ], 00:10:29.436 "driver_specific": {} 00:10:29.436 }' 00:10:29.436 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:29.436 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:29.436 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:29.436 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:29.436 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:29.436 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:10:29.436 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:29.696 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:29.696 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:29.696 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:29.696 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:29.696 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:29.696 06:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:29.696 [2024-07-23 06:23:42.211260] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.982 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.241 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:30.241 "name": "Existed_Raid", 00:10:30.241 "uuid": "1a061919-48bc-11ef-a06c-59ddad71024c", 00:10:30.241 "strip_size_kb": 0, 00:10:30.241 "state": "online", 00:10:30.241 "raid_level": "raid1", 00:10:30.241 "superblock": false, 00:10:30.241 "num_base_bdevs": 2, 00:10:30.241 "num_base_bdevs_discovered": 1, 00:10:30.241 "num_base_bdevs_operational": 1, 00:10:30.241 "base_bdevs_list": [ 00:10:30.241 { 00:10:30.241 "name": null, 00:10:30.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.241 "is_configured": false, 
00:10:30.241 "data_offset": 0, 00:10:30.241 "data_size": 65536 00:10:30.241 }, 00:10:30.241 { 00:10:30.241 "name": "BaseBdev2", 00:10:30.241 "uuid": "1a061176-48bc-11ef-a06c-59ddad71024c", 00:10:30.241 "is_configured": true, 00:10:30.241 "data_offset": 0, 00:10:30.241 "data_size": 65536 00:10:30.241 } 00:10:30.241 ] 00:10:30.241 }' 00:10:30.241 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:30.241 06:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.499 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:30.499 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:30.499 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.499 06:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:30.758 06:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:30.758 06:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.758 06:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:31.016 [2024-07-23 06:23:43.385231] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:31.016 [2024-07-23 06:23:43.385286] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.016 [2024-07-23 06:23:43.391373] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.016 [2024-07-23 06:23:43.391393] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.016 [2024-07-23 06:23:43.391398] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x248d21c34a00 name Existed_Raid, state offline 00:10:31.016 06:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:31.016 06:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:31.016 06:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.016 06:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 50804 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 50804 ']' 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 50804 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # ps -c -o command 50804 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:31.275 killing process with pid 50804 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50804' 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 50804 00:10:31.275 [2024-07-23 06:23:43.701007] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.275 [2024-07-23 06:23:43.701049] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:31.275 06:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 50804 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:31.534 00:10:31.534 real 0m8.997s 00:10:31.534 user 0m15.781s 00:10:31.534 sys 0m1.452s 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.534 ************************************ 00:10:31.534 END TEST raid_state_function_test 00:10:31.534 ************************************ 00:10:31.534 06:23:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:31.534 06:23:43 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:10:31.534 06:23:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:31.534 06:23:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.534 06:23:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:31.534 ************************************ 00:10:31.534 START TEST raid_state_function_test_sb 00:10:31.534 ************************************ 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=51079 00:10:31.534 Process raid pid: 51079 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51079' 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 51079 /var/tmp/spdk-raid.sock 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 51079 ']' 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.534 06:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.534 [2024-07-23 06:23:43.938001] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:31.534 [2024-07-23 06:23:43.938221] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:32.101 EAL: TSC is not safe to use in SMP mode 00:10:32.101 EAL: TSC is not invariant 00:10:32.101 [2024-07-23 06:23:44.478496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.101 [2024-07-23 06:23:44.576037] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
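The trace below exercises the superblock ('-s') variant of the state-function test against the RPC socket at /var/tmp/spdk-raid.sock. A minimal sketch of the equivalent RPC sequence, assuming the same socket, bdev names, and 32 MB / 512-byte malloc geometry used in this run (the recorded test creates the raid first and attaches the base bdevs afterwards, which is why it passes through the "configuring" state):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_create 32 512 -b BaseBdev1                                   # first base bdev
    $RPC bdev_malloc_create 32 512 -b BaseBdev2                                   # second base bdev
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid    # -s writes a superblock
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # expect: online
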
00:10:32.102 [2024-07-23 06:23:44.578523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.102 [2024-07-23 06:23:44.579437] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.102 [2024-07-23 06:23:44.579454] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.669 06:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:32.669 06:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:10:32.669 06:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:32.928 [2024-07-23 06:23:45.224867] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:32.928 [2024-07-23 06:23:45.224952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:32.928 [2024-07-23 06:23:45.224957] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.928 [2024-07-23 06:23:45.224966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.928 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:32.928 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:32.928 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:32.928 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:32.928 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:32.928 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:32.928 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:32.928 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:32.928 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:32.928 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:32.928 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.928 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.187 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:33.187 "name": "Existed_Raid", 00:10:33.187 "uuid": "1d2bfe12-48bc-11ef-a06c-59ddad71024c", 00:10:33.187 "strip_size_kb": 0, 00:10:33.187 "state": "configuring", 00:10:33.187 "raid_level": "raid1", 00:10:33.187 "superblock": true, 00:10:33.187 "num_base_bdevs": 2, 00:10:33.187 "num_base_bdevs_discovered": 0, 00:10:33.187 "num_base_bdevs_operational": 2, 00:10:33.187 "base_bdevs_list": [ 00:10:33.187 { 00:10:33.187 "name": "BaseBdev1", 00:10:33.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.187 "is_configured": false, 00:10:33.187 "data_offset": 0, 00:10:33.187 "data_size": 0 00:10:33.187 }, 00:10:33.187 
{ 00:10:33.187 "name": "BaseBdev2", 00:10:33.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.187 "is_configured": false, 00:10:33.187 "data_offset": 0, 00:10:33.187 "data_size": 0 00:10:33.187 } 00:10:33.187 ] 00:10:33.187 }' 00:10:33.187 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:33.187 06:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.447 06:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:33.704 [2024-07-23 06:23:46.060997] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.704 [2024-07-23 06:23:46.061026] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21919834500 name Existed_Raid, state configuring 00:10:33.704 06:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:33.962 [2024-07-23 06:23:46.293009] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.962 [2024-07-23 06:23:46.293072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.962 [2024-07-23 06:23:46.293078] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.962 [2024-07-23 06:23:46.293103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.962 06:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:34.235 [2024-07-23 06:23:46.566057] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.235 BaseBdev1 00:10:34.235 06:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:34.235 06:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:34.235 06:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:34.235 06:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:34.235 06:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:34.235 06:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:34.235 06:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:34.494 06:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:34.764 [ 00:10:34.764 { 00:10:34.764 "name": "BaseBdev1", 00:10:34.764 "aliases": [ 00:10:34.764 "1df87c8f-48bc-11ef-a06c-59ddad71024c" 00:10:34.764 ], 00:10:34.764 "product_name": "Malloc disk", 00:10:34.764 "block_size": 512, 00:10:34.764 "num_blocks": 65536, 00:10:34.764 "uuid": "1df87c8f-48bc-11ef-a06c-59ddad71024c", 00:10:34.764 "assigned_rate_limits": { 00:10:34.764 "rw_ios_per_sec": 0, 00:10:34.764 "rw_mbytes_per_sec": 0, 00:10:34.764 
"r_mbytes_per_sec": 0, 00:10:34.764 "w_mbytes_per_sec": 0 00:10:34.764 }, 00:10:34.764 "claimed": true, 00:10:34.764 "claim_type": "exclusive_write", 00:10:34.764 "zoned": false, 00:10:34.764 "supported_io_types": { 00:10:34.764 "read": true, 00:10:34.764 "write": true, 00:10:34.764 "unmap": true, 00:10:34.764 "flush": true, 00:10:34.764 "reset": true, 00:10:34.764 "nvme_admin": false, 00:10:34.764 "nvme_io": false, 00:10:34.764 "nvme_io_md": false, 00:10:34.764 "write_zeroes": true, 00:10:34.764 "zcopy": true, 00:10:34.764 "get_zone_info": false, 00:10:34.764 "zone_management": false, 00:10:34.764 "zone_append": false, 00:10:34.764 "compare": false, 00:10:34.764 "compare_and_write": false, 00:10:34.764 "abort": true, 00:10:34.764 "seek_hole": false, 00:10:34.764 "seek_data": false, 00:10:34.764 "copy": true, 00:10:34.764 "nvme_iov_md": false 00:10:34.764 }, 00:10:34.764 "memory_domains": [ 00:10:34.764 { 00:10:34.764 "dma_device_id": "system", 00:10:34.764 "dma_device_type": 1 00:10:34.764 }, 00:10:34.764 { 00:10:34.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.764 "dma_device_type": 2 00:10:34.764 } 00:10:34.764 ], 00:10:34.764 "driver_specific": {} 00:10:34.764 } 00:10:34.764 ] 00:10:34.764 06:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:34.764 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:34.764 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:34.764 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:34.764 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:34.764 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:34.764 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:34.764 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:34.764 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:34.764 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:34.764 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:34.764 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.764 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.023 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:35.023 "name": "Existed_Raid", 00:10:35.023 "uuid": "1dcefa80-48bc-11ef-a06c-59ddad71024c", 00:10:35.023 "strip_size_kb": 0, 00:10:35.023 "state": "configuring", 00:10:35.023 "raid_level": "raid1", 00:10:35.023 "superblock": true, 00:10:35.023 "num_base_bdevs": 2, 00:10:35.023 "num_base_bdevs_discovered": 1, 00:10:35.023 "num_base_bdevs_operational": 2, 00:10:35.023 "base_bdevs_list": [ 00:10:35.023 { 00:10:35.023 "name": "BaseBdev1", 00:10:35.023 "uuid": "1df87c8f-48bc-11ef-a06c-59ddad71024c", 00:10:35.023 "is_configured": true, 00:10:35.023 "data_offset": 2048, 00:10:35.023 "data_size": 63488 00:10:35.023 }, 
00:10:35.023 { 00:10:35.023 "name": "BaseBdev2", 00:10:35.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.023 "is_configured": false, 00:10:35.023 "data_offset": 0, 00:10:35.023 "data_size": 0 00:10:35.023 } 00:10:35.023 ] 00:10:35.023 }' 00:10:35.023 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:35.023 06:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.380 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:35.640 [2024-07-23 06:23:47.977088] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.640 [2024-07-23 06:23:47.977125] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21919834500 name Existed_Raid, state configuring 00:10:35.640 06:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:35.901 [2024-07-23 06:23:48.221113] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.901 [2024-07-23 06:23:48.221937] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.901 [2024-07-23 06:23:48.221977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.901 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:35.901 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:35.901 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:35.901 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:35.901 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:35.901 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:35.901 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:35.901 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:35.901 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:35.901 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:35.901 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:35.901 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:35.901 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.901 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.160 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:36.160 "name": "Existed_Raid", 00:10:36.160 "uuid": "1ef52ec9-48bc-11ef-a06c-59ddad71024c", 00:10:36.160 "strip_size_kb": 0, 00:10:36.160 "state": "configuring", 
00:10:36.160 "raid_level": "raid1", 00:10:36.160 "superblock": true, 00:10:36.160 "num_base_bdevs": 2, 00:10:36.160 "num_base_bdevs_discovered": 1, 00:10:36.160 "num_base_bdevs_operational": 2, 00:10:36.160 "base_bdevs_list": [ 00:10:36.160 { 00:10:36.160 "name": "BaseBdev1", 00:10:36.160 "uuid": "1df87c8f-48bc-11ef-a06c-59ddad71024c", 00:10:36.160 "is_configured": true, 00:10:36.160 "data_offset": 2048, 00:10:36.160 "data_size": 63488 00:10:36.160 }, 00:10:36.160 { 00:10:36.160 "name": "BaseBdev2", 00:10:36.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.160 "is_configured": false, 00:10:36.160 "data_offset": 0, 00:10:36.160 "data_size": 0 00:10:36.160 } 00:10:36.160 ] 00:10:36.160 }' 00:10:36.160 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:36.160 06:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.439 06:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:36.697 [2024-07-23 06:23:49.125271] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.697 [2024-07-23 06:23:49.125351] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x21919834a00 00:10:36.697 [2024-07-23 06:23:49.125358] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:36.697 [2024-07-23 06:23:49.125378] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21919897e20 00:10:36.697 [2024-07-23 06:23:49.125422] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21919834a00 00:10:36.697 [2024-07-23 06:23:49.125427] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x21919834a00 00:10:36.697 [2024-07-23 06:23:49.125446] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.697 BaseBdev2 00:10:36.697 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:36.697 06:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:36.697 06:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:36.697 06:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:36.697 06:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:36.697 06:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:36.697 06:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:36.956 06:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:37.215 [ 00:10:37.215 { 00:10:37.215 "name": "BaseBdev2", 00:10:37.215 "aliases": [ 00:10:37.215 "1f7f2117-48bc-11ef-a06c-59ddad71024c" 00:10:37.215 ], 00:10:37.215 "product_name": "Malloc disk", 00:10:37.215 "block_size": 512, 00:10:37.215 "num_blocks": 65536, 00:10:37.215 "uuid": "1f7f2117-48bc-11ef-a06c-59ddad71024c", 00:10:37.215 "assigned_rate_limits": { 00:10:37.215 "rw_ios_per_sec": 0, 00:10:37.215 
"rw_mbytes_per_sec": 0, 00:10:37.215 "r_mbytes_per_sec": 0, 00:10:37.215 "w_mbytes_per_sec": 0 00:10:37.215 }, 00:10:37.215 "claimed": true, 00:10:37.215 "claim_type": "exclusive_write", 00:10:37.215 "zoned": false, 00:10:37.215 "supported_io_types": { 00:10:37.215 "read": true, 00:10:37.215 "write": true, 00:10:37.215 "unmap": true, 00:10:37.215 "flush": true, 00:10:37.215 "reset": true, 00:10:37.215 "nvme_admin": false, 00:10:37.215 "nvme_io": false, 00:10:37.215 "nvme_io_md": false, 00:10:37.215 "write_zeroes": true, 00:10:37.215 "zcopy": true, 00:10:37.215 "get_zone_info": false, 00:10:37.215 "zone_management": false, 00:10:37.215 "zone_append": false, 00:10:37.215 "compare": false, 00:10:37.215 "compare_and_write": false, 00:10:37.215 "abort": true, 00:10:37.215 "seek_hole": false, 00:10:37.215 "seek_data": false, 00:10:37.215 "copy": true, 00:10:37.215 "nvme_iov_md": false 00:10:37.215 }, 00:10:37.215 "memory_domains": [ 00:10:37.215 { 00:10:37.215 "dma_device_id": "system", 00:10:37.215 "dma_device_type": 1 00:10:37.215 }, 00:10:37.215 { 00:10:37.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.215 "dma_device_type": 2 00:10:37.215 } 00:10:37.215 ], 00:10:37.215 "driver_specific": {} 00:10:37.215 } 00:10:37.215 ] 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.215 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.474 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:37.474 "name": "Existed_Raid", 00:10:37.474 "uuid": "1ef52ec9-48bc-11ef-a06c-59ddad71024c", 00:10:37.474 "strip_size_kb": 0, 00:10:37.474 "state": "online", 00:10:37.474 "raid_level": "raid1", 00:10:37.474 "superblock": true, 00:10:37.474 "num_base_bdevs": 2, 00:10:37.474 "num_base_bdevs_discovered": 2, 00:10:37.474 "num_base_bdevs_operational": 2, 00:10:37.474 
"base_bdevs_list": [ 00:10:37.474 { 00:10:37.474 "name": "BaseBdev1", 00:10:37.474 "uuid": "1df87c8f-48bc-11ef-a06c-59ddad71024c", 00:10:37.474 "is_configured": true, 00:10:37.474 "data_offset": 2048, 00:10:37.474 "data_size": 63488 00:10:37.474 }, 00:10:37.474 { 00:10:37.474 "name": "BaseBdev2", 00:10:37.474 "uuid": "1f7f2117-48bc-11ef-a06c-59ddad71024c", 00:10:37.474 "is_configured": true, 00:10:37.474 "data_offset": 2048, 00:10:37.474 "data_size": 63488 00:10:37.474 } 00:10:37.474 ] 00:10:37.474 }' 00:10:37.474 06:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:37.474 06:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.041 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:38.041 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:38.041 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:38.041 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:38.041 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:38.041 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:38.041 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:38.041 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:38.041 [2024-07-23 06:23:50.545282] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.299 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:38.299 "name": "Existed_Raid", 00:10:38.299 "aliases": [ 00:10:38.299 "1ef52ec9-48bc-11ef-a06c-59ddad71024c" 00:10:38.299 ], 00:10:38.299 "product_name": "Raid Volume", 00:10:38.299 "block_size": 512, 00:10:38.299 "num_blocks": 63488, 00:10:38.299 "uuid": "1ef52ec9-48bc-11ef-a06c-59ddad71024c", 00:10:38.299 "assigned_rate_limits": { 00:10:38.300 "rw_ios_per_sec": 0, 00:10:38.300 "rw_mbytes_per_sec": 0, 00:10:38.300 "r_mbytes_per_sec": 0, 00:10:38.300 "w_mbytes_per_sec": 0 00:10:38.300 }, 00:10:38.300 "claimed": false, 00:10:38.300 "zoned": false, 00:10:38.300 "supported_io_types": { 00:10:38.300 "read": true, 00:10:38.300 "write": true, 00:10:38.300 "unmap": false, 00:10:38.300 "flush": false, 00:10:38.300 "reset": true, 00:10:38.300 "nvme_admin": false, 00:10:38.300 "nvme_io": false, 00:10:38.300 "nvme_io_md": false, 00:10:38.300 "write_zeroes": true, 00:10:38.300 "zcopy": false, 00:10:38.300 "get_zone_info": false, 00:10:38.300 "zone_management": false, 00:10:38.300 "zone_append": false, 00:10:38.300 "compare": false, 00:10:38.300 "compare_and_write": false, 00:10:38.300 "abort": false, 00:10:38.300 "seek_hole": false, 00:10:38.300 "seek_data": false, 00:10:38.300 "copy": false, 00:10:38.300 "nvme_iov_md": false 00:10:38.300 }, 00:10:38.300 "memory_domains": [ 00:10:38.300 { 00:10:38.300 "dma_device_id": "system", 00:10:38.300 "dma_device_type": 1 00:10:38.300 }, 00:10:38.300 { 00:10:38.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.300 "dma_device_type": 2 00:10:38.300 }, 00:10:38.300 { 00:10:38.300 "dma_device_id": "system", 00:10:38.300 "dma_device_type": 1 00:10:38.300 }, 
00:10:38.300 { 00:10:38.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.300 "dma_device_type": 2 00:10:38.300 } 00:10:38.300 ], 00:10:38.300 "driver_specific": { 00:10:38.300 "raid": { 00:10:38.300 "uuid": "1ef52ec9-48bc-11ef-a06c-59ddad71024c", 00:10:38.300 "strip_size_kb": 0, 00:10:38.300 "state": "online", 00:10:38.300 "raid_level": "raid1", 00:10:38.300 "superblock": true, 00:10:38.300 "num_base_bdevs": 2, 00:10:38.300 "num_base_bdevs_discovered": 2, 00:10:38.300 "num_base_bdevs_operational": 2, 00:10:38.300 "base_bdevs_list": [ 00:10:38.300 { 00:10:38.300 "name": "BaseBdev1", 00:10:38.300 "uuid": "1df87c8f-48bc-11ef-a06c-59ddad71024c", 00:10:38.300 "is_configured": true, 00:10:38.300 "data_offset": 2048, 00:10:38.300 "data_size": 63488 00:10:38.300 }, 00:10:38.300 { 00:10:38.300 "name": "BaseBdev2", 00:10:38.300 "uuid": "1f7f2117-48bc-11ef-a06c-59ddad71024c", 00:10:38.300 "is_configured": true, 00:10:38.300 "data_offset": 2048, 00:10:38.300 "data_size": 63488 00:10:38.300 } 00:10:38.300 ] 00:10:38.300 } 00:10:38.300 } 00:10:38.300 }' 00:10:38.300 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.300 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:38.300 BaseBdev2' 00:10:38.300 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:38.300 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:38.300 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:38.558 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:38.558 "name": "BaseBdev1", 00:10:38.558 "aliases": [ 00:10:38.558 "1df87c8f-48bc-11ef-a06c-59ddad71024c" 00:10:38.558 ], 00:10:38.558 "product_name": "Malloc disk", 00:10:38.558 "block_size": 512, 00:10:38.558 "num_blocks": 65536, 00:10:38.558 "uuid": "1df87c8f-48bc-11ef-a06c-59ddad71024c", 00:10:38.558 "assigned_rate_limits": { 00:10:38.558 "rw_ios_per_sec": 0, 00:10:38.558 "rw_mbytes_per_sec": 0, 00:10:38.558 "r_mbytes_per_sec": 0, 00:10:38.558 "w_mbytes_per_sec": 0 00:10:38.558 }, 00:10:38.558 "claimed": true, 00:10:38.558 "claim_type": "exclusive_write", 00:10:38.558 "zoned": false, 00:10:38.558 "supported_io_types": { 00:10:38.558 "read": true, 00:10:38.558 "write": true, 00:10:38.558 "unmap": true, 00:10:38.558 "flush": true, 00:10:38.558 "reset": true, 00:10:38.558 "nvme_admin": false, 00:10:38.558 "nvme_io": false, 00:10:38.558 "nvme_io_md": false, 00:10:38.558 "write_zeroes": true, 00:10:38.558 "zcopy": true, 00:10:38.558 "get_zone_info": false, 00:10:38.558 "zone_management": false, 00:10:38.558 "zone_append": false, 00:10:38.558 "compare": false, 00:10:38.558 "compare_and_write": false, 00:10:38.558 "abort": true, 00:10:38.558 "seek_hole": false, 00:10:38.558 "seek_data": false, 00:10:38.558 "copy": true, 00:10:38.558 "nvme_iov_md": false 00:10:38.558 }, 00:10:38.558 "memory_domains": [ 00:10:38.558 { 00:10:38.558 "dma_device_id": "system", 00:10:38.558 "dma_device_type": 1 00:10:38.558 }, 00:10:38.558 { 00:10:38.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.558 "dma_device_type": 2 00:10:38.558 } 00:10:38.558 ], 00:10:38.558 "driver_specific": {} 00:10:38.558 }' 00:10:38.558 06:23:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:38.558 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:38.558 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:38.558 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:38.558 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:38.558 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:38.558 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:38.558 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:38.558 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:38.558 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:38.558 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:38.558 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:38.559 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:38.559 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:38.559 06:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:38.817 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:38.817 "name": "BaseBdev2", 00:10:38.817 "aliases": [ 00:10:38.817 "1f7f2117-48bc-11ef-a06c-59ddad71024c" 00:10:38.817 ], 00:10:38.817 "product_name": "Malloc disk", 00:10:38.817 "block_size": 512, 00:10:38.817 "num_blocks": 65536, 00:10:38.817 "uuid": "1f7f2117-48bc-11ef-a06c-59ddad71024c", 00:10:38.817 "assigned_rate_limits": { 00:10:38.817 "rw_ios_per_sec": 0, 00:10:38.817 "rw_mbytes_per_sec": 0, 00:10:38.817 "r_mbytes_per_sec": 0, 00:10:38.817 "w_mbytes_per_sec": 0 00:10:38.817 }, 00:10:38.817 "claimed": true, 00:10:38.817 "claim_type": "exclusive_write", 00:10:38.817 "zoned": false, 00:10:38.817 "supported_io_types": { 00:10:38.817 "read": true, 00:10:38.817 "write": true, 00:10:38.817 "unmap": true, 00:10:38.817 "flush": true, 00:10:38.817 "reset": true, 00:10:38.817 "nvme_admin": false, 00:10:38.817 "nvme_io": false, 00:10:38.817 "nvme_io_md": false, 00:10:38.817 "write_zeroes": true, 00:10:38.817 "zcopy": true, 00:10:38.817 "get_zone_info": false, 00:10:38.817 "zone_management": false, 00:10:38.817 "zone_append": false, 00:10:38.817 "compare": false, 00:10:38.817 "compare_and_write": false, 00:10:38.817 "abort": true, 00:10:38.817 "seek_hole": false, 00:10:38.817 "seek_data": false, 00:10:38.817 "copy": true, 00:10:38.817 "nvme_iov_md": false 00:10:38.817 }, 00:10:38.817 "memory_domains": [ 00:10:38.817 { 00:10:38.817 "dma_device_id": "system", 00:10:38.817 "dma_device_type": 1 00:10:38.817 }, 00:10:38.817 { 00:10:38.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.817 "dma_device_type": 2 00:10:38.817 } 00:10:38.817 ], 00:10:38.817 "driver_specific": {} 00:10:38.817 }' 00:10:38.817 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:38.817 06:23:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:38.817 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:38.817 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:38.817 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:38.817 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:38.817 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:38.817 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:38.817 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:38.817 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:38.817 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:38.817 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:38.817 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:39.076 [2024-07-23 06:23:51.513377] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.076 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.335 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:39.335 
"name": "Existed_Raid", 00:10:39.335 "uuid": "1ef52ec9-48bc-11ef-a06c-59ddad71024c", 00:10:39.335 "strip_size_kb": 0, 00:10:39.335 "state": "online", 00:10:39.335 "raid_level": "raid1", 00:10:39.335 "superblock": true, 00:10:39.335 "num_base_bdevs": 2, 00:10:39.335 "num_base_bdevs_discovered": 1, 00:10:39.335 "num_base_bdevs_operational": 1, 00:10:39.335 "base_bdevs_list": [ 00:10:39.335 { 00:10:39.335 "name": null, 00:10:39.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.335 "is_configured": false, 00:10:39.335 "data_offset": 2048, 00:10:39.335 "data_size": 63488 00:10:39.335 }, 00:10:39.335 { 00:10:39.335 "name": "BaseBdev2", 00:10:39.335 "uuid": "1f7f2117-48bc-11ef-a06c-59ddad71024c", 00:10:39.335 "is_configured": true, 00:10:39.335 "data_offset": 2048, 00:10:39.335 "data_size": 63488 00:10:39.335 } 00:10:39.335 ] 00:10:39.335 }' 00:10:39.335 06:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:39.335 06:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.593 06:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:39.593 06:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:39.593 06:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.593 06:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:39.852 06:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:39.852 06:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.852 06:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:40.420 [2024-07-23 06:23:52.631456] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:40.420 [2024-07-23 06:23:52.631503] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.420 [2024-07-23 06:23:52.637552] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.420 [2024-07-23 06:23:52.637567] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.420 [2024-07-23 06:23:52.637572] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21919834a00 name Existed_Raid, state offline 00:10:40.420 06:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:40.420 06:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:40.420 06:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:40.420 06:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:40.679 06:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:40.679 06:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:40.679 06:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:40.679 06:23:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 51079 00:10:40.679 06:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 51079 ']' 00:10:40.679 06:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 51079 00:10:40.679 06:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:10:40.679 06:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:40.679 06:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 51079 00:10:40.679 06:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:10:40.679 06:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:40.679 06:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:40.679 killing process with pid 51079 00:10:40.679 06:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51079' 00:10:40.679 06:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 51079 00:10:40.679 [2024-07-23 06:23:52.978040] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.679 [2024-07-23 06:23:52.978075] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.679 06:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 51079 00:10:40.679 06:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:40.679 00:10:40.679 real 0m9.226s 00:10:40.679 user 0m16.211s 00:10:40.679 sys 0m1.457s 00:10:40.679 06:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.679 06:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.679 ************************************ 00:10:40.679 END TEST raid_state_function_test_sb 00:10:40.679 ************************************ 00:10:40.679 06:23:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:40.680 06:23:53 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:10:40.680 06:23:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:40.680 06:23:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.680 06:23:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.680 ************************************ 00:10:40.680 START TEST raid_superblock_test 00:10:40.680 ************************************ 00:10:40.680 06:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:10:40.680 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:10:40.680 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:10:40.680 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:40.680 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:40.680 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:40.680 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:40.680 06:23:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:40.680 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:40.680 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:40.680 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:40.680 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:40.939 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:40.939 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:40.939 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:10:40.939 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:10:40.939 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=51353 00:10:40.939 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:40.939 06:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 51353 /var/tmp/spdk-raid.sock 00:10:40.939 06:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 51353 ']' 00:10:40.939 06:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:40.939 06:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:40.939 06:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:40.939 06:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.939 06:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.939 [2024-07-23 06:23:53.205467] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:40.939 [2024-07-23 06:23:53.205637] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:41.507 EAL: TSC is not safe to use in SMP mode 00:10:41.507 EAL: TSC is not invariant 00:10:41.507 [2024-07-23 06:23:53.739932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.507 [2024-07-23 06:23:53.829991] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
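The raid_superblock_test trace that follows layers passthru bdevs on top of malloc bdevs before assembling the raid1 volume. A condensed sketch of that RPC sequence, with the socket path, bdev names, and UUIDs taken from this run:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_create 32 512 -b malloc1
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001   # pt1 wraps malloc1
    $RPC bdev_malloc_create 32 512 -b malloc2
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002   # pt2 wraps malloc2
    $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s                          # superblock-enabled raid1 over pt1/pt2
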
00:10:41.507 [2024-07-23 06:23:53.832091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.507 [2024-07-23 06:23:53.832881] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.507 [2024-07-23 06:23:53.832895] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.765 06:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.765 06:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:10:41.765 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:41.765 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:41.765 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:41.765 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:41.766 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:41.766 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:41.766 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:41.766 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:41.766 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:42.023 malloc1 00:10:42.023 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:42.282 [2024-07-23 06:23:54.793269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:42.282 [2024-07-23 06:23:54.793327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.282 [2024-07-23 06:23:54.793340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe9bb434780 00:10:42.282 [2024-07-23 06:23:54.793348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.282 [2024-07-23 06:23:54.794279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.282 [2024-07-23 06:23:54.794303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:42.282 pt1 00:10:42.540 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:42.540 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:42.540 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:42.540 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:42.540 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:42.540 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:42.540 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:42.540 06:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:42.540 06:23:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:42.540 malloc2 00:10:42.540 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:42.798 [2024-07-23 06:23:55.281279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:42.798 [2024-07-23 06:23:55.281354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.798 [2024-07-23 06:23:55.281367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe9bb434c80 00:10:42.798 [2024-07-23 06:23:55.281375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.798 [2024-07-23 06:23:55.282152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.798 [2024-07-23 06:23:55.282194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:42.798 pt2 00:10:42.798 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:42.798 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:42.798 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:10:43.055 [2024-07-23 06:23:55.509294] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:43.055 [2024-07-23 06:23:55.509945] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:43.055 [2024-07-23 06:23:55.510007] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xe9bb434f00 00:10:43.055 [2024-07-23 06:23:55.510014] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:43.055 [2024-07-23 06:23:55.510051] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe9bb497e20 00:10:43.055 [2024-07-23 06:23:55.510169] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe9bb434f00 00:10:43.055 [2024-07-23 06:23:55.510176] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe9bb434f00 00:10:43.055 [2024-07-23 06:23:55.510220] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.055 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:43.055 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:43.055 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:43.055 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:43.055 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:43.055 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:43.055 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:43.055 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:43.055 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- 
# local num_base_bdevs_discovered 00:10:43.055 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:43.055 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.055 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.313 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:43.313 "name": "raid_bdev1", 00:10:43.313 "uuid": "234d4568-48bc-11ef-a06c-59ddad71024c", 00:10:43.313 "strip_size_kb": 0, 00:10:43.313 "state": "online", 00:10:43.313 "raid_level": "raid1", 00:10:43.313 "superblock": true, 00:10:43.313 "num_base_bdevs": 2, 00:10:43.313 "num_base_bdevs_discovered": 2, 00:10:43.313 "num_base_bdevs_operational": 2, 00:10:43.313 "base_bdevs_list": [ 00:10:43.313 { 00:10:43.313 "name": "pt1", 00:10:43.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.313 "is_configured": true, 00:10:43.313 "data_offset": 2048, 00:10:43.313 "data_size": 63488 00:10:43.313 }, 00:10:43.313 { 00:10:43.313 "name": "pt2", 00:10:43.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.313 "is_configured": true, 00:10:43.313 "data_offset": 2048, 00:10:43.313 "data_size": 63488 00:10:43.313 } 00:10:43.313 ] 00:10:43.313 }' 00:10:43.313 06:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:43.313 06:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.572 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:43.572 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:43.572 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:43.572 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:43.572 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:43.572 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:43.572 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:43.572 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:43.831 [2024-07-23 06:23:56.289345] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.831 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:43.831 "name": "raid_bdev1", 00:10:43.831 "aliases": [ 00:10:43.831 "234d4568-48bc-11ef-a06c-59ddad71024c" 00:10:43.831 ], 00:10:43.831 "product_name": "Raid Volume", 00:10:43.831 "block_size": 512, 00:10:43.831 "num_blocks": 63488, 00:10:43.831 "uuid": "234d4568-48bc-11ef-a06c-59ddad71024c", 00:10:43.831 "assigned_rate_limits": { 00:10:43.831 "rw_ios_per_sec": 0, 00:10:43.831 "rw_mbytes_per_sec": 0, 00:10:43.831 "r_mbytes_per_sec": 0, 00:10:43.831 "w_mbytes_per_sec": 0 00:10:43.831 }, 00:10:43.831 "claimed": false, 00:10:43.831 "zoned": false, 00:10:43.831 "supported_io_types": { 00:10:43.831 "read": true, 00:10:43.831 "write": true, 00:10:43.831 "unmap": false, 00:10:43.831 "flush": false, 00:10:43.831 "reset": true, 00:10:43.831 "nvme_admin": false, 00:10:43.831 "nvme_io": false, 00:10:43.831 
"nvme_io_md": false, 00:10:43.831 "write_zeroes": true, 00:10:43.831 "zcopy": false, 00:10:43.831 "get_zone_info": false, 00:10:43.831 "zone_management": false, 00:10:43.831 "zone_append": false, 00:10:43.831 "compare": false, 00:10:43.831 "compare_and_write": false, 00:10:43.831 "abort": false, 00:10:43.831 "seek_hole": false, 00:10:43.831 "seek_data": false, 00:10:43.831 "copy": false, 00:10:43.831 "nvme_iov_md": false 00:10:43.831 }, 00:10:43.831 "memory_domains": [ 00:10:43.831 { 00:10:43.831 "dma_device_id": "system", 00:10:43.831 "dma_device_type": 1 00:10:43.831 }, 00:10:43.831 { 00:10:43.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.831 "dma_device_type": 2 00:10:43.831 }, 00:10:43.831 { 00:10:43.831 "dma_device_id": "system", 00:10:43.831 "dma_device_type": 1 00:10:43.831 }, 00:10:43.831 { 00:10:43.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.831 "dma_device_type": 2 00:10:43.831 } 00:10:43.831 ], 00:10:43.831 "driver_specific": { 00:10:43.831 "raid": { 00:10:43.831 "uuid": "234d4568-48bc-11ef-a06c-59ddad71024c", 00:10:43.831 "strip_size_kb": 0, 00:10:43.831 "state": "online", 00:10:43.831 "raid_level": "raid1", 00:10:43.831 "superblock": true, 00:10:43.831 "num_base_bdevs": 2, 00:10:43.831 "num_base_bdevs_discovered": 2, 00:10:43.831 "num_base_bdevs_operational": 2, 00:10:43.831 "base_bdevs_list": [ 00:10:43.831 { 00:10:43.831 "name": "pt1", 00:10:43.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.831 "is_configured": true, 00:10:43.831 "data_offset": 2048, 00:10:43.831 "data_size": 63488 00:10:43.831 }, 00:10:43.831 { 00:10:43.831 "name": "pt2", 00:10:43.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.831 "is_configured": true, 00:10:43.831 "data_offset": 2048, 00:10:43.831 "data_size": 63488 00:10:43.831 } 00:10:43.831 ] 00:10:43.831 } 00:10:43.831 } 00:10:43.831 }' 00:10:43.831 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.831 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:43.831 pt2' 00:10:43.831 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:43.831 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:43.831 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:44.090 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:44.090 "name": "pt1", 00:10:44.090 "aliases": [ 00:10:44.090 "00000000-0000-0000-0000-000000000001" 00:10:44.090 ], 00:10:44.090 "product_name": "passthru", 00:10:44.090 "block_size": 512, 00:10:44.090 "num_blocks": 65536, 00:10:44.090 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.090 "assigned_rate_limits": { 00:10:44.090 "rw_ios_per_sec": 0, 00:10:44.090 "rw_mbytes_per_sec": 0, 00:10:44.090 "r_mbytes_per_sec": 0, 00:10:44.090 "w_mbytes_per_sec": 0 00:10:44.090 }, 00:10:44.090 "claimed": true, 00:10:44.090 "claim_type": "exclusive_write", 00:10:44.090 "zoned": false, 00:10:44.090 "supported_io_types": { 00:10:44.090 "read": true, 00:10:44.090 "write": true, 00:10:44.090 "unmap": true, 00:10:44.090 "flush": true, 00:10:44.090 "reset": true, 00:10:44.090 "nvme_admin": false, 00:10:44.090 "nvme_io": false, 00:10:44.090 "nvme_io_md": false, 00:10:44.090 "write_zeroes": true, 00:10:44.090 "zcopy": 
true, 00:10:44.090 "get_zone_info": false, 00:10:44.090 "zone_management": false, 00:10:44.090 "zone_append": false, 00:10:44.090 "compare": false, 00:10:44.090 "compare_and_write": false, 00:10:44.090 "abort": true, 00:10:44.090 "seek_hole": false, 00:10:44.090 "seek_data": false, 00:10:44.090 "copy": true, 00:10:44.090 "nvme_iov_md": false 00:10:44.090 }, 00:10:44.090 "memory_domains": [ 00:10:44.090 { 00:10:44.090 "dma_device_id": "system", 00:10:44.090 "dma_device_type": 1 00:10:44.090 }, 00:10:44.090 { 00:10:44.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.090 "dma_device_type": 2 00:10:44.090 } 00:10:44.090 ], 00:10:44.090 "driver_specific": { 00:10:44.090 "passthru": { 00:10:44.090 "name": "pt1", 00:10:44.090 "base_bdev_name": "malloc1" 00:10:44.090 } 00:10:44.090 } 00:10:44.090 }' 00:10:44.090 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:44.090 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:44.348 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:44.348 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:44.348 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:44.348 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:44.348 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:44.348 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:44.348 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:44.348 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:44.348 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:44.348 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:44.348 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:44.348 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:44.348 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:44.606 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:44.606 "name": "pt2", 00:10:44.606 "aliases": [ 00:10:44.606 "00000000-0000-0000-0000-000000000002" 00:10:44.606 ], 00:10:44.607 "product_name": "passthru", 00:10:44.607 "block_size": 512, 00:10:44.607 "num_blocks": 65536, 00:10:44.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.607 "assigned_rate_limits": { 00:10:44.607 "rw_ios_per_sec": 0, 00:10:44.607 "rw_mbytes_per_sec": 0, 00:10:44.607 "r_mbytes_per_sec": 0, 00:10:44.607 "w_mbytes_per_sec": 0 00:10:44.607 }, 00:10:44.607 "claimed": true, 00:10:44.607 "claim_type": "exclusive_write", 00:10:44.607 "zoned": false, 00:10:44.607 "supported_io_types": { 00:10:44.607 "read": true, 00:10:44.607 "write": true, 00:10:44.607 "unmap": true, 00:10:44.607 "flush": true, 00:10:44.607 "reset": true, 00:10:44.607 "nvme_admin": false, 00:10:44.607 "nvme_io": false, 00:10:44.607 "nvme_io_md": false, 00:10:44.607 "write_zeroes": true, 00:10:44.607 "zcopy": true, 00:10:44.607 "get_zone_info": false, 00:10:44.607 "zone_management": false, 00:10:44.607 "zone_append": false, 00:10:44.607 "compare": false, 
00:10:44.607 "compare_and_write": false, 00:10:44.607 "abort": true, 00:10:44.607 "seek_hole": false, 00:10:44.607 "seek_data": false, 00:10:44.607 "copy": true, 00:10:44.607 "nvme_iov_md": false 00:10:44.607 }, 00:10:44.607 "memory_domains": [ 00:10:44.607 { 00:10:44.607 "dma_device_id": "system", 00:10:44.607 "dma_device_type": 1 00:10:44.607 }, 00:10:44.607 { 00:10:44.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.607 "dma_device_type": 2 00:10:44.607 } 00:10:44.607 ], 00:10:44.607 "driver_specific": { 00:10:44.607 "passthru": { 00:10:44.607 "name": "pt2", 00:10:44.607 "base_bdev_name": "malloc2" 00:10:44.607 } 00:10:44.607 } 00:10:44.607 }' 00:10:44.607 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:44.607 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:44.607 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:44.607 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:44.607 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:44.607 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:44.607 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:44.607 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:44.607 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:44.607 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:44.607 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:44.607 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:44.607 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:44.607 06:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:44.865 [2024-07-23 06:23:57.273464] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.865 06:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=234d4568-48bc-11ef-a06c-59ddad71024c 00:10:44.865 06:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 234d4568-48bc-11ef-a06c-59ddad71024c ']' 00:10:44.865 06:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:45.123 [2024-07-23 06:23:57.553414] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.123 [2024-07-23 06:23:57.553444] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.123 [2024-07-23 06:23:57.553477] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.123 [2024-07-23 06:23:57.553496] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.123 [2024-07-23 06:23:57.553502] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe9bb434f00 name raid_bdev1, state offline 00:10:45.123 06:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:10:45.123 06:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:45.382 06:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:45.382 06:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:45.382 06:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.382 06:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:45.640 06:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.640 06:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:45.898 06:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:45.898 06:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:46.465 06:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:46.465 06:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:10:46.465 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:10:46.465 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:10:46.465 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:46.465 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:46.465 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:46.465 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:46.465 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:46.465 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:46.465 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:46.465 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:46.465 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:10:46.465 [2024-07-23 06:23:58.977461] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:46.465 [2024-07-23 06:23:58.978065] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:46.465 [2024-07-23 06:23:58.978091] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 
00:10:46.465 [2024-07-23 06:23:58.978136] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:46.465 [2024-07-23 06:23:58.978147] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.465 [2024-07-23 06:23:58.978152] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe9bb434c80 name raid_bdev1, state configuring 00:10:46.465 request: 00:10:46.465 { 00:10:46.465 "name": "raid_bdev1", 00:10:46.465 "raid_level": "raid1", 00:10:46.465 "base_bdevs": [ 00:10:46.465 "malloc1", 00:10:46.465 "malloc2" 00:10:46.465 ], 00:10:46.465 "superblock": false, 00:10:46.465 "method": "bdev_raid_create", 00:10:46.466 "req_id": 1 00:10:46.466 } 00:10:46.466 Got JSON-RPC error response 00:10:46.466 response: 00:10:46.466 { 00:10:46.466 "code": -17, 00:10:46.466 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:46.466 } 00:10:46.723 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:10:46.724 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:46.724 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:46.724 06:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:46.724 06:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:46.724 06:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:46.981 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:46.981 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:46.981 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:47.295 [2024-07-23 06:23:59.533471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:47.295 [2024-07-23 06:23:59.533521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.295 [2024-07-23 06:23:59.533533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe9bb434780 00:10:47.295 [2024-07-23 06:23:59.533541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.295 [2024-07-23 06:23:59.534197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.295 [2024-07-23 06:23:59.534222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:47.295 [2024-07-23 06:23:59.534248] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:47.295 [2024-07-23 06:23:59.534259] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:47.295 pt1 00:10:47.295 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:47.295 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:47.295 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:47.295 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:47.295 06:23:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:47.295 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:47.295 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:47.295 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:47.295 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:47.295 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:47.295 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.295 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.295 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:47.295 "name": "raid_bdev1", 00:10:47.295 "uuid": "234d4568-48bc-11ef-a06c-59ddad71024c", 00:10:47.295 "strip_size_kb": 0, 00:10:47.295 "state": "configuring", 00:10:47.295 "raid_level": "raid1", 00:10:47.295 "superblock": true, 00:10:47.295 "num_base_bdevs": 2, 00:10:47.295 "num_base_bdevs_discovered": 1, 00:10:47.295 "num_base_bdevs_operational": 2, 00:10:47.295 "base_bdevs_list": [ 00:10:47.295 { 00:10:47.295 "name": "pt1", 00:10:47.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:47.295 "is_configured": true, 00:10:47.295 "data_offset": 2048, 00:10:47.295 "data_size": 63488 00:10:47.295 }, 00:10:47.295 { 00:10:47.295 "name": null, 00:10:47.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.295 "is_configured": false, 00:10:47.295 "data_offset": 2048, 00:10:47.295 "data_size": 63488 00:10:47.295 } 00:10:47.295 ] 00:10:47.295 }' 00:10:47.295 06:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:47.295 06:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:47.865 [2024-07-23 06:24:00.337521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:47.865 [2024-07-23 06:24:00.337590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.865 [2024-07-23 06:24:00.337618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe9bb434f00 00:10:47.865 [2024-07-23 06:24:00.337626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.865 [2024-07-23 06:24:00.337738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.865 [2024-07-23 06:24:00.337749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:47.865 [2024-07-23 06:24:00.337787] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:47.865 [2024-07-23 06:24:00.337796] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:47.865 [2024-07-23 06:24:00.337822] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xe9bb435180 00:10:47.865 [2024-07-23 06:24:00.337827] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:47.865 [2024-07-23 06:24:00.337846] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe9bb497e20 00:10:47.865 [2024-07-23 06:24:00.337905] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe9bb435180 00:10:47.865 [2024-07-23 06:24:00.337910] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe9bb435180 00:10:47.865 [2024-07-23 06:24:00.337933] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.865 pt2 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.865 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.433 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:48.433 "name": "raid_bdev1", 00:10:48.433 "uuid": "234d4568-48bc-11ef-a06c-59ddad71024c", 00:10:48.433 "strip_size_kb": 0, 00:10:48.433 "state": "online", 00:10:48.433 "raid_level": "raid1", 00:10:48.433 "superblock": true, 00:10:48.433 "num_base_bdevs": 2, 00:10:48.433 "num_base_bdevs_discovered": 2, 00:10:48.433 "num_base_bdevs_operational": 2, 00:10:48.433 "base_bdevs_list": [ 00:10:48.433 { 00:10:48.433 "name": "pt1", 00:10:48.433 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.433 "is_configured": true, 00:10:48.433 "data_offset": 2048, 00:10:48.433 "data_size": 63488 00:10:48.433 }, 00:10:48.433 { 00:10:48.433 "name": "pt2", 00:10:48.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.433 "is_configured": true, 00:10:48.433 "data_offset": 2048, 00:10:48.433 "data_size": 63488 00:10:48.433 } 00:10:48.433 ] 00:10:48.433 }' 00:10:48.433 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:48.433 06:24:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.433 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:48.433 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:48.433 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:48.433 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:48.433 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:48.433 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:48.433 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:48.433 06:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:48.691 [2024-07-23 06:24:01.161575] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.691 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:48.691 "name": "raid_bdev1", 00:10:48.691 "aliases": [ 00:10:48.691 "234d4568-48bc-11ef-a06c-59ddad71024c" 00:10:48.691 ], 00:10:48.691 "product_name": "Raid Volume", 00:10:48.691 "block_size": 512, 00:10:48.692 "num_blocks": 63488, 00:10:48.692 "uuid": "234d4568-48bc-11ef-a06c-59ddad71024c", 00:10:48.692 "assigned_rate_limits": { 00:10:48.692 "rw_ios_per_sec": 0, 00:10:48.692 "rw_mbytes_per_sec": 0, 00:10:48.692 "r_mbytes_per_sec": 0, 00:10:48.692 "w_mbytes_per_sec": 0 00:10:48.692 }, 00:10:48.692 "claimed": false, 00:10:48.692 "zoned": false, 00:10:48.692 "supported_io_types": { 00:10:48.692 "read": true, 00:10:48.692 "write": true, 00:10:48.692 "unmap": false, 00:10:48.692 "flush": false, 00:10:48.692 "reset": true, 00:10:48.692 "nvme_admin": false, 00:10:48.692 "nvme_io": false, 00:10:48.692 "nvme_io_md": false, 00:10:48.692 "write_zeroes": true, 00:10:48.692 "zcopy": false, 00:10:48.692 "get_zone_info": false, 00:10:48.692 "zone_management": false, 00:10:48.692 "zone_append": false, 00:10:48.692 "compare": false, 00:10:48.692 "compare_and_write": false, 00:10:48.692 "abort": false, 00:10:48.692 "seek_hole": false, 00:10:48.692 "seek_data": false, 00:10:48.692 "copy": false, 00:10:48.692 "nvme_iov_md": false 00:10:48.692 }, 00:10:48.692 "memory_domains": [ 00:10:48.692 { 00:10:48.692 "dma_device_id": "system", 00:10:48.692 "dma_device_type": 1 00:10:48.692 }, 00:10:48.692 { 00:10:48.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.692 "dma_device_type": 2 00:10:48.692 }, 00:10:48.692 { 00:10:48.692 "dma_device_id": "system", 00:10:48.692 "dma_device_type": 1 00:10:48.692 }, 00:10:48.692 { 00:10:48.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.692 "dma_device_type": 2 00:10:48.692 } 00:10:48.692 ], 00:10:48.692 "driver_specific": { 00:10:48.692 "raid": { 00:10:48.692 "uuid": "234d4568-48bc-11ef-a06c-59ddad71024c", 00:10:48.692 "strip_size_kb": 0, 00:10:48.692 "state": "online", 00:10:48.692 "raid_level": "raid1", 00:10:48.692 "superblock": true, 00:10:48.692 "num_base_bdevs": 2, 00:10:48.692 "num_base_bdevs_discovered": 2, 00:10:48.692 "num_base_bdevs_operational": 2, 00:10:48.692 "base_bdevs_list": [ 00:10:48.692 { 00:10:48.692 "name": "pt1", 00:10:48.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.692 "is_configured": true, 00:10:48.692 "data_offset": 2048, 
00:10:48.692 "data_size": 63488 00:10:48.692 }, 00:10:48.692 { 00:10:48.692 "name": "pt2", 00:10:48.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.692 "is_configured": true, 00:10:48.692 "data_offset": 2048, 00:10:48.692 "data_size": 63488 00:10:48.692 } 00:10:48.692 ] 00:10:48.692 } 00:10:48.692 } 00:10:48.692 }' 00:10:48.692 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.692 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:48.692 pt2' 00:10:48.692 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:48.692 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:48.692 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:48.951 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:48.951 "name": "pt1", 00:10:48.951 "aliases": [ 00:10:48.951 "00000000-0000-0000-0000-000000000001" 00:10:48.951 ], 00:10:48.951 "product_name": "passthru", 00:10:48.951 "block_size": 512, 00:10:48.951 "num_blocks": 65536, 00:10:48.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.951 "assigned_rate_limits": { 00:10:48.951 "rw_ios_per_sec": 0, 00:10:48.951 "rw_mbytes_per_sec": 0, 00:10:48.951 "r_mbytes_per_sec": 0, 00:10:48.951 "w_mbytes_per_sec": 0 00:10:48.951 }, 00:10:48.951 "claimed": true, 00:10:48.951 "claim_type": "exclusive_write", 00:10:48.951 "zoned": false, 00:10:48.951 "supported_io_types": { 00:10:48.951 "read": true, 00:10:48.951 "write": true, 00:10:48.951 "unmap": true, 00:10:48.951 "flush": true, 00:10:48.951 "reset": true, 00:10:48.951 "nvme_admin": false, 00:10:48.951 "nvme_io": false, 00:10:48.951 "nvme_io_md": false, 00:10:48.951 "write_zeroes": true, 00:10:48.951 "zcopy": true, 00:10:48.951 "get_zone_info": false, 00:10:48.951 "zone_management": false, 00:10:48.951 "zone_append": false, 00:10:48.951 "compare": false, 00:10:48.951 "compare_and_write": false, 00:10:48.951 "abort": true, 00:10:48.951 "seek_hole": false, 00:10:48.951 "seek_data": false, 00:10:48.951 "copy": true, 00:10:48.951 "nvme_iov_md": false 00:10:48.951 }, 00:10:48.951 "memory_domains": [ 00:10:48.951 { 00:10:48.951 "dma_device_id": "system", 00:10:48.951 "dma_device_type": 1 00:10:48.951 }, 00:10:48.951 { 00:10:48.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.951 "dma_device_type": 2 00:10:48.951 } 00:10:48.951 ], 00:10:48.951 "driver_specific": { 00:10:48.951 "passthru": { 00:10:48.951 "name": "pt1", 00:10:48.951 "base_bdev_name": "malloc1" 00:10:48.951 } 00:10:48.951 } 00:10:48.951 }' 00:10:48.951 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:48.951 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:48.951 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:48.951 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:48.951 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:48.951 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:48.951 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:48.951 06:24:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:48.951 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:48.951 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:48.951 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:49.224 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:49.224 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:49.224 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:49.224 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:49.483 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:49.483 "name": "pt2", 00:10:49.484 "aliases": [ 00:10:49.484 "00000000-0000-0000-0000-000000000002" 00:10:49.484 ], 00:10:49.484 "product_name": "passthru", 00:10:49.484 "block_size": 512, 00:10:49.484 "num_blocks": 65536, 00:10:49.484 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.484 "assigned_rate_limits": { 00:10:49.484 "rw_ios_per_sec": 0, 00:10:49.484 "rw_mbytes_per_sec": 0, 00:10:49.484 "r_mbytes_per_sec": 0, 00:10:49.484 "w_mbytes_per_sec": 0 00:10:49.484 }, 00:10:49.484 "claimed": true, 00:10:49.484 "claim_type": "exclusive_write", 00:10:49.484 "zoned": false, 00:10:49.484 "supported_io_types": { 00:10:49.484 "read": true, 00:10:49.484 "write": true, 00:10:49.484 "unmap": true, 00:10:49.484 "flush": true, 00:10:49.484 "reset": true, 00:10:49.484 "nvme_admin": false, 00:10:49.484 "nvme_io": false, 00:10:49.484 "nvme_io_md": false, 00:10:49.484 "write_zeroes": true, 00:10:49.484 "zcopy": true, 00:10:49.484 "get_zone_info": false, 00:10:49.484 "zone_management": false, 00:10:49.484 "zone_append": false, 00:10:49.484 "compare": false, 00:10:49.484 "compare_and_write": false, 00:10:49.484 "abort": true, 00:10:49.484 "seek_hole": false, 00:10:49.484 "seek_data": false, 00:10:49.484 "copy": true, 00:10:49.484 "nvme_iov_md": false 00:10:49.484 }, 00:10:49.484 "memory_domains": [ 00:10:49.484 { 00:10:49.484 "dma_device_id": "system", 00:10:49.484 "dma_device_type": 1 00:10:49.484 }, 00:10:49.484 { 00:10:49.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.484 "dma_device_type": 2 00:10:49.484 } 00:10:49.484 ], 00:10:49.484 "driver_specific": { 00:10:49.484 "passthru": { 00:10:49.484 "name": "pt2", 00:10:49.484 "base_bdev_name": "malloc2" 00:10:49.484 } 00:10:49.484 } 00:10:49.484 }' 00:10:49.484 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:49.484 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:49.484 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:49.484 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:49.484 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:49.484 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:49.484 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:49.484 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:49.484 06:24:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:49.484 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:49.484 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:49.484 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:49.484 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:49.484 06:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:49.742 [2024-07-23 06:24:02.069599] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.742 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 234d4568-48bc-11ef-a06c-59ddad71024c '!=' 234d4568-48bc-11ef-a06c-59ddad71024c ']' 00:10:49.742 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:10:49.742 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:49.743 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:10:49.743 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:50.002 [2024-07-23 06:24:02.349581] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:50.002 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:50.002 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:50.002 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:50.002 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:50.002 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:50.002 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:50.002 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:50.002 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:50.002 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:50.002 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:50.002 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:50.002 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.260 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:50.260 "name": "raid_bdev1", 00:10:50.260 "uuid": "234d4568-48bc-11ef-a06c-59ddad71024c", 00:10:50.260 "strip_size_kb": 0, 00:10:50.260 "state": "online", 00:10:50.260 "raid_level": "raid1", 00:10:50.260 "superblock": true, 00:10:50.260 "num_base_bdevs": 2, 00:10:50.260 "num_base_bdevs_discovered": 1, 00:10:50.260 "num_base_bdevs_operational": 1, 00:10:50.260 "base_bdevs_list": [ 00:10:50.260 { 00:10:50.260 "name": null, 00:10:50.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.260 "is_configured": false, 00:10:50.260 "data_offset": 
2048, 00:10:50.260 "data_size": 63488 00:10:50.260 }, 00:10:50.260 { 00:10:50.260 "name": "pt2", 00:10:50.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.260 "is_configured": true, 00:10:50.260 "data_offset": 2048, 00:10:50.260 "data_size": 63488 00:10:50.260 } 00:10:50.260 ] 00:10:50.260 }' 00:10:50.260 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:50.260 06:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.519 06:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:50.777 [2024-07-23 06:24:03.229593] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.777 [2024-07-23 06:24:03.229618] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.777 [2024-07-23 06:24:03.229664] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.777 [2024-07-23 06:24:03.229677] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.777 [2024-07-23 06:24:03.229682] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe9bb435180 name raid_bdev1, state offline 00:10:50.777 06:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:50.777 06:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:10:51.035 06:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:10:51.035 06:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:10:51.035 06:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:10:51.035 06:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:10:51.035 06:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:51.293 06:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:10:51.293 06:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:10:51.293 06:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:10:51.293 06:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:10:51.293 06:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:10:51.293 06:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:51.551 [2024-07-23 06:24:04.045617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:51.551 [2024-07-23 06:24:04.045673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.551 [2024-07-23 06:24:04.045702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe9bb434f00 00:10:51.551 [2024-07-23 06:24:04.045710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.551 [2024-07-23 06:24:04.046373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.551 [2024-07-23 
06:24:04.046399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:51.551 [2024-07-23 06:24:04.046425] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:51.551 [2024-07-23 06:24:04.046436] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:51.551 [2024-07-23 06:24:04.046462] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xe9bb435180 00:10:51.551 [2024-07-23 06:24:04.046466] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:51.551 [2024-07-23 06:24:04.046486] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe9bb497e20 00:10:51.551 [2024-07-23 06:24:04.046535] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe9bb435180 00:10:51.551 [2024-07-23 06:24:04.046539] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe9bb435180 00:10:51.551 [2024-07-23 06:24:04.046560] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.551 pt2 00:10:51.551 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:51.551 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:51.551 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:51.551 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:51.551 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:51.551 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:51.551 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:51.551 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:51.551 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:51.551 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:51.551 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.551 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.809 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:51.809 "name": "raid_bdev1", 00:10:51.809 "uuid": "234d4568-48bc-11ef-a06c-59ddad71024c", 00:10:51.809 "strip_size_kb": 0, 00:10:51.809 "state": "online", 00:10:51.809 "raid_level": "raid1", 00:10:51.809 "superblock": true, 00:10:51.809 "num_base_bdevs": 2, 00:10:51.809 "num_base_bdevs_discovered": 1, 00:10:51.809 "num_base_bdevs_operational": 1, 00:10:51.809 "base_bdevs_list": [ 00:10:51.809 { 00:10:51.809 "name": null, 00:10:51.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.809 "is_configured": false, 00:10:51.809 "data_offset": 2048, 00:10:51.809 "data_size": 63488 00:10:51.809 }, 00:10:51.809 { 00:10:51.809 "name": "pt2", 00:10:51.810 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.810 "is_configured": true, 00:10:51.810 "data_offset": 2048, 00:10:51.810 "data_size": 63488 00:10:51.810 } 00:10:51.810 ] 00:10:51.810 }' 00:10:51.810 06:24:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:51.810 06:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.090 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:52.348 [2024-07-23 06:24:04.853651] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.348 [2024-07-23 06:24:04.853675] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.348 [2024-07-23 06:24:04.853697] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.348 [2024-07-23 06:24:04.853709] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.348 [2024-07-23 06:24:04.853713] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe9bb435180 name raid_bdev1, state offline 00:10:52.606 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.606 06:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:10:52.606 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:10:52.606 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:10:52.606 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:10:52.606 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:52.864 [2024-07-23 06:24:05.369671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:52.864 [2024-07-23 06:24:05.369728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.864 [2024-07-23 06:24:05.369740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe9bb434c80 00:10:52.864 [2024-07-23 06:24:05.369749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.864 [2024-07-23 06:24:05.370397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.864 [2024-07-23 06:24:05.370423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:52.864 [2024-07-23 06:24:05.370449] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:52.865 [2024-07-23 06:24:05.370461] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:52.865 [2024-07-23 06:24:05.370495] bdev_raid.c:3641:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:52.865 [2024-07-23 06:24:05.370500] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.865 [2024-07-23 06:24:05.370505] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe9bb434780 name raid_bdev1, state configuring 00:10:52.865 [2024-07-23 06:24:05.370512] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:52.865 [2024-07-23 06:24:05.370526] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xe9bb434780 00:10:52.865 [2024-07-23 06:24:05.370530] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 63488, blocklen 512 00:10:52.865 [2024-07-23 06:24:05.370550] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe9bb497e20 00:10:52.865 [2024-07-23 06:24:05.370597] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe9bb434780 00:10:52.865 [2024-07-23 06:24:05.370602] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe9bb434780 00:10:52.865 [2024-07-23 06:24:05.370622] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.865 pt1 00:10:53.123 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:10:53.123 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:53.123 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:53.123 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:53.123 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:53.123 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:53.123 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:53.123 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:53.123 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:53.123 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:53.123 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:53.123 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.124 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.382 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:53.382 "name": "raid_bdev1", 00:10:53.382 "uuid": "234d4568-48bc-11ef-a06c-59ddad71024c", 00:10:53.382 "strip_size_kb": 0, 00:10:53.382 "state": "online", 00:10:53.382 "raid_level": "raid1", 00:10:53.382 "superblock": true, 00:10:53.382 "num_base_bdevs": 2, 00:10:53.382 "num_base_bdevs_discovered": 1, 00:10:53.382 "num_base_bdevs_operational": 1, 00:10:53.382 "base_bdevs_list": [ 00:10:53.382 { 00:10:53.382 "name": null, 00:10:53.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.382 "is_configured": false, 00:10:53.382 "data_offset": 2048, 00:10:53.382 "data_size": 63488 00:10:53.382 }, 00:10:53.382 { 00:10:53.382 "name": "pt2", 00:10:53.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.382 "is_configured": true, 00:10:53.382 "data_offset": 2048, 00:10:53.382 "data_size": 63488 00:10:53.382 } 00:10:53.382 ] 00:10:53.382 }' 00:10:53.382 06:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:53.382 06:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.641 06:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:10:53.641 06:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:53.899 
06:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:10:53.899 06:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:53.899 06:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:10:54.158 [2024-07-23 06:24:06.517776] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.158 06:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 234d4568-48bc-11ef-a06c-59ddad71024c '!=' 234d4568-48bc-11ef-a06c-59ddad71024c ']' 00:10:54.158 06:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 51353 00:10:54.158 06:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 51353 ']' 00:10:54.158 06:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 51353 00:10:54.158 06:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:10:54.158 06:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:54.158 06:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 51353 00:10:54.158 06:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:10:54.158 06:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:54.158 killing process with pid 51353 00:10:54.158 06:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:54.158 06:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51353' 00:10:54.158 06:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 51353 00:10:54.158 [2024-07-23 06:24:06.547837] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:54.158 [2024-07-23 06:24:06.547861] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.158 [2024-07-23 06:24:06.547873] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.158 [2024-07-23 06:24:06.547877] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe9bb434780 name raid_bdev1, state offline 00:10:54.158 06:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 51353 00:10:54.158 [2024-07-23 06:24:06.559892] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.418 06:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:54.418 00:10:54.418 real 0m13.542s 00:10:54.418 user 0m24.249s 00:10:54.418 sys 0m2.059s 00:10:54.418 06:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.418 ************************************ 00:10:54.418 END TEST raid_superblock_test 00:10:54.418 ************************************ 00:10:54.418 06:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.418 06:24:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:54.418 06:24:06 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:10:54.418 06:24:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:54.418 06:24:06 bdev_raid -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:10:54.418 06:24:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:54.418 ************************************ 00:10:54.418 START TEST raid_read_error_test 00:10:54.418 ************************************ 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.JeuXH5jRBt 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51746 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51746 /var/tmp/spdk-raid.sock 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 51746 ']' 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.418 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock... 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.418 06:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.418 [2024-07-23 06:24:06.802730] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:54.418 [2024-07-23 06:24:06.802969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:55.044 EAL: TSC is not safe to use in SMP mode 00:10:55.044 EAL: TSC is not invariant 00:10:55.044 [2024-07-23 06:24:07.342510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.044 [2024-07-23 06:24:07.424858] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:55.044 [2024-07-23 06:24:07.427012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.044 [2024-07-23 06:24:07.427852] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.044 [2024-07-23 06:24:07.427867] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.610 06:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.610 06:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:55.610 06:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:55.610 06:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:55.869 BaseBdev1_malloc 00:10:55.869 06:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:55.869 true 00:10:55.869 06:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:56.127 [2024-07-23 06:24:08.627243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:56.127 [2024-07-23 06:24:08.627329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.127 [2024-07-23 06:24:08.627374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x36c99234780 00:10:56.127 [2024-07-23 06:24:08.627383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.127 [2024-07-23 06:24:08.628069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.127 [2024-07-23 06:24:08.628099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:56.127 BaseBdev1 00:10:56.127 06:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:56.127 06:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:56.693 BaseBdev2_malloc 00:10:56.693 06:24:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:56.952 true 00:10:56.952 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:56.952 [2024-07-23 06:24:09.459260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:56.952 [2024-07-23 06:24:09.459318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.952 [2024-07-23 06:24:09.459346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x36c99234c80 00:10:56.952 [2024-07-23 06:24:09.459355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.952 [2024-07-23 06:24:09.460052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.952 [2024-07-23 06:24:09.460080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:56.952 BaseBdev2 00:10:57.210 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:57.210 [2024-07-23 06:24:09.687282] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.210 [2024-07-23 06:24:09.687896] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.210 [2024-07-23 06:24:09.687967] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x36c99234f00 00:10:57.210 [2024-07-23 06:24:09.687974] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:57.210 [2024-07-23 06:24:09.688008] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x36c992a0e20 00:10:57.210 [2024-07-23 06:24:09.688087] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x36c99234f00 00:10:57.210 [2024-07-23 06:24:09.688092] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x36c99234f00 00:10:57.210 [2024-07-23 06:24:09.688133] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.210 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:57.210 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:57.210 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:57.210 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:57.210 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:57.210 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:57.210 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:57.210 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:57.210 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:57.210 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:57.210 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.210 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.777 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:57.777 "name": "raid_bdev1", 00:10:57.777 "uuid": "2bc0a90f-48bc-11ef-a06c-59ddad71024c", 00:10:57.777 "strip_size_kb": 0, 00:10:57.777 "state": "online", 00:10:57.777 "raid_level": "raid1", 00:10:57.777 "superblock": true, 00:10:57.777 "num_base_bdevs": 2, 00:10:57.777 "num_base_bdevs_discovered": 2, 00:10:57.777 "num_base_bdevs_operational": 2, 00:10:57.777 "base_bdevs_list": [ 00:10:57.777 { 00:10:57.777 "name": "BaseBdev1", 00:10:57.777 "uuid": "1f5b97fe-325d-d45e-943e-b32636a09bcd", 00:10:57.777 "is_configured": true, 00:10:57.777 "data_offset": 2048, 00:10:57.777 "data_size": 63488 00:10:57.777 }, 00:10:57.777 { 00:10:57.777 "name": "BaseBdev2", 00:10:57.777 "uuid": "6b86526f-76a1-0a56-9e34-ca3980f413c9", 00:10:57.777 "is_configured": true, 00:10:57.777 "data_offset": 2048, 00:10:57.777 "data_size": 63488 00:10:57.777 } 00:10:57.777 ] 00:10:57.777 }' 00:10:57.777 06:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:57.777 06:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.777 06:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:57.777 06:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:58.036 [2024-07-23 06:24:10.415487] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x36c992a0ec0 00:10:58.971 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.230 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.508 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:59.508 "name": "raid_bdev1", 00:10:59.508 "uuid": "2bc0a90f-48bc-11ef-a06c-59ddad71024c", 00:10:59.508 "strip_size_kb": 0, 00:10:59.508 "state": "online", 00:10:59.508 "raid_level": "raid1", 00:10:59.508 "superblock": true, 00:10:59.508 "num_base_bdevs": 2, 00:10:59.508 "num_base_bdevs_discovered": 2, 00:10:59.508 "num_base_bdevs_operational": 2, 00:10:59.508 "base_bdevs_list": [ 00:10:59.508 { 00:10:59.508 "name": "BaseBdev1", 00:10:59.508 "uuid": "1f5b97fe-325d-d45e-943e-b32636a09bcd", 00:10:59.508 "is_configured": true, 00:10:59.508 "data_offset": 2048, 00:10:59.508 "data_size": 63488 00:10:59.508 }, 00:10:59.508 { 00:10:59.508 "name": "BaseBdev2", 00:10:59.508 "uuid": "6b86526f-76a1-0a56-9e34-ca3980f413c9", 00:10:59.508 "is_configured": true, 00:10:59.508 "data_offset": 2048, 00:10:59.508 "data_size": 63488 00:10:59.508 } 00:10:59.508 ] 00:10:59.508 }' 00:10:59.508 06:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:59.508 06:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.771 06:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:00.030 [2024-07-23 06:24:12.443775] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.030 [2024-07-23 06:24:12.443806] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.030 [2024-07-23 06:24:12.444143] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.030 [2024-07-23 06:24:12.444154] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.030 [2024-07-23 06:24:12.444167] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.030 [2024-07-23 06:24:12.444171] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x36c99234f00 name raid_bdev1, state offline 00:11:00.030 0 00:11:00.030 06:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51746 00:11:00.030 06:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 51746 ']' 00:11:00.030 06:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 51746 00:11:00.030 06:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:11:00.030 06:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:00.030 06:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 51746 00:11:00.030 06:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:11:00.030 06:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:11:00.030 06:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:11:00.030 killing process with pid 51746 00:11:00.030 06:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 
-- # echo 'killing process with pid 51746' 00:11:00.030 06:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 51746 00:11:00.030 [2024-07-23 06:24:12.474439] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.030 06:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 51746 00:11:00.030 [2024-07-23 06:24:12.486389] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.290 06:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:00.290 06:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.JeuXH5jRBt 00:11:00.290 06:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:00.290 06:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:11:00.290 06:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:11:00.290 06:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:00.290 06:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:00.290 06:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:00.290 00:11:00.290 real 0m5.888s 00:11:00.290 user 0m9.028s 00:11:00.290 sys 0m1.033s 00:11:00.290 06:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.290 06:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.290 ************************************ 00:11:00.290 END TEST raid_read_error_test 00:11:00.290 ************************************ 00:11:00.290 06:24:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:00.290 06:24:12 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:00.290 06:24:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:00.290 06:24:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.290 06:24:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.290 ************************************ 00:11:00.290 START TEST raid_write_error_test 00:11:00.290 ************************************ 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs )) 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.zXoaTScbYc 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51870 00:11:00.290 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51870 /var/tmp/spdk-raid.sock 00:11:00.291 06:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:00.291 06:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 51870 ']' 00:11:00.291 06:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:00.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:00.291 06:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.291 06:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:00.291 06:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.291 06:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.291 [2024-07-23 06:24:12.729123] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:00.291 [2024-07-23 06:24:12.729323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:00.873 EAL: TSC is not safe to use in SMP mode 00:11:00.873 EAL: TSC is not invariant 00:11:00.873 [2024-07-23 06:24:13.275530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.873 [2024-07-23 06:24:13.362899] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:11:00.873 [2024-07-23 06:24:13.365037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.873 [2024-07-23 06:24:13.365820] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.873 [2024-07-23 06:24:13.365831] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.440 06:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.440 06:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:01.440 06:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:01.440 06:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:01.440 BaseBdev1_malloc 00:11:01.440 06:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:01.698 true 00:11:01.698 06:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:01.958 [2024-07-23 06:24:14.449778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:01.958 [2024-07-23 06:24:14.449858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.958 [2024-07-23 06:24:14.449895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13662c234780 00:11:01.958 [2024-07-23 06:24:14.449908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.958 [2024-07-23 06:24:14.450614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.958 [2024-07-23 06:24:14.450639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:01.958 BaseBdev1 00:11:01.958 06:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:01.958 06:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:02.217 BaseBdev2_malloc 00:11:02.217 06:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:02.784 true 00:11:02.784 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:02.784 [2024-07-23 06:24:15.245827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:02.784 [2024-07-23 06:24:15.245902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.784 [2024-07-23 06:24:15.245928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13662c234c80 00:11:02.784 [2024-07-23 06:24:15.245938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.784 [2024-07-23 06:24:15.246593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.784 [2024-07-23 06:24:15.246621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:11:02.784 BaseBdev2 00:11:02.785 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:03.043 [2024-07-23 06:24:15.473852] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.043 [2024-07-23 06:24:15.474433] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.043 [2024-07-23 06:24:15.474500] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x13662c234f00 00:11:03.043 [2024-07-23 06:24:15.474506] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:03.043 [2024-07-23 06:24:15.474540] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13662c2a0e20 00:11:03.043 [2024-07-23 06:24:15.474615] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13662c234f00 00:11:03.043 [2024-07-23 06:24:15.474620] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x13662c234f00 00:11:03.043 [2024-07-23 06:24:15.474648] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.043 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:03.043 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:03.043 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:03.043 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:03.043 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:03.043 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:03.043 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:03.043 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:03.043 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:03.043 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:03.043 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.043 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.303 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:03.303 "name": "raid_bdev1", 00:11:03.303 "uuid": "2f339f00-48bc-11ef-a06c-59ddad71024c", 00:11:03.303 "strip_size_kb": 0, 00:11:03.303 "state": "online", 00:11:03.303 "raid_level": "raid1", 00:11:03.303 "superblock": true, 00:11:03.303 "num_base_bdevs": 2, 00:11:03.303 "num_base_bdevs_discovered": 2, 00:11:03.303 "num_base_bdevs_operational": 2, 00:11:03.303 "base_bdevs_list": [ 00:11:03.303 { 00:11:03.303 "name": "BaseBdev1", 00:11:03.303 "uuid": "15670104-72f1-6757-a842-5d78e8339049", 00:11:03.303 "is_configured": true, 00:11:03.303 "data_offset": 2048, 00:11:03.303 "data_size": 63488 00:11:03.303 }, 00:11:03.303 { 00:11:03.303 "name": "BaseBdev2", 00:11:03.303 "uuid": "bb91e72d-99c6-a950-869d-f5f9be3e69ab", 
00:11:03.303 "is_configured": true, 00:11:03.303 "data_offset": 2048, 00:11:03.303 "data_size": 63488 00:11:03.303 } 00:11:03.303 ] 00:11:03.303 }' 00:11:03.303 06:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:03.303 06:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.562 06:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:03.562 06:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:03.827 [2024-07-23 06:24:16.202127] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13662c2a0ec0 00:11:04.777 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:05.036 [2024-07-23 06:24:17.394718] bdev_raid.c:2248:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:05.036 [2024-07-23 06:24:17.394783] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:05.036 [2024-07-23 06:24:17.394925] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x13662c2a0ec0 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.036 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.296 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:05.296 "name": "raid_bdev1", 00:11:05.296 "uuid": "2f339f00-48bc-11ef-a06c-59ddad71024c", 00:11:05.296 "strip_size_kb": 0, 00:11:05.296 "state": "online", 00:11:05.296 "raid_level": "raid1", 00:11:05.296 
"superblock": true, 00:11:05.296 "num_base_bdevs": 2, 00:11:05.296 "num_base_bdevs_discovered": 1, 00:11:05.296 "num_base_bdevs_operational": 1, 00:11:05.296 "base_bdevs_list": [ 00:11:05.296 { 00:11:05.296 "name": null, 00:11:05.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.297 "is_configured": false, 00:11:05.297 "data_offset": 2048, 00:11:05.297 "data_size": 63488 00:11:05.297 }, 00:11:05.297 { 00:11:05.297 "name": "BaseBdev2", 00:11:05.297 "uuid": "bb91e72d-99c6-a950-869d-f5f9be3e69ab", 00:11:05.297 "is_configured": true, 00:11:05.297 "data_offset": 2048, 00:11:05.297 "data_size": 63488 00:11:05.297 } 00:11:05.297 ] 00:11:05.297 }' 00:11:05.297 06:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:05.297 06:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.555 06:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:05.815 [2024-07-23 06:24:18.283904] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.815 [2024-07-23 06:24:18.283932] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.815 [2024-07-23 06:24:18.284261] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.815 [2024-07-23 06:24:18.284271] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.815 [2024-07-23 06:24:18.284281] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.815 [2024-07-23 06:24:18.284286] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13662c234f00 name raid_bdev1, state offline 00:11:05.815 0 00:11:05.815 06:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51870 00:11:05.815 06:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 51870 ']' 00:11:05.815 06:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 51870 00:11:05.815 06:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:11:05.815 06:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:05.815 06:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 51870 00:11:05.815 06:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:11:05.815 06:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:11:05.815 killing process with pid 51870 00:11:05.815 06:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:11:05.815 06:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51870' 00:11:05.815 06:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 51870 00:11:05.815 [2024-07-23 06:24:18.316805] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.815 06:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 51870 00:11:05.815 [2024-07-23 06:24:18.328290] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.075 06:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:06.075 06:24:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.zXoaTScbYc 00:11:06.075 06:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:06.075 06:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:11:06.075 06:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:11:06.075 06:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:06.075 06:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:06.075 06:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:06.075 00:11:06.075 real 0m5.798s 00:11:06.075 user 0m8.922s 00:11:06.075 sys 0m0.975s 00:11:06.075 06:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.075 06:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.075 ************************************ 00:11:06.075 END TEST raid_write_error_test 00:11:06.075 ************************************ 00:11:06.075 06:24:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:06.075 06:24:18 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:11:06.075 06:24:18 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:06.075 06:24:18 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:06.075 06:24:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:06.075 06:24:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.075 06:24:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.075 ************************************ 00:11:06.075 START TEST raid_state_function_test 00:11:06.075 ************************************ 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:06.075 06:24:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=51996 00:11:06.075 Process raid pid: 51996 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51996' 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 51996 /var/tmp/spdk-raid.sock 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 51996 ']' 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.075 06:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:06.075 [2024-07-23 06:24:18.571810] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:06.075 [2024-07-23 06:24:18.572305] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:06.643 EAL: TSC is not safe to use in SMP mode 00:11:06.643 EAL: TSC is not invariant 00:11:06.643 [2024-07-23 06:24:19.095852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.902 [2024-07-23 06:24:19.180294] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:11:06.902 [2024-07-23 06:24:19.182422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.902 [2024-07-23 06:24:19.183189] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.902 [2024-07-23 06:24:19.183211] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.167 06:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:07.167 06:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:11:07.167 06:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:07.449 [2024-07-23 06:24:19.874726] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:07.449 [2024-07-23 06:24:19.874779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:07.449 [2024-07-23 06:24:19.874784] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:07.450 [2024-07-23 06:24:19.874793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:07.450 [2024-07-23 06:24:19.874797] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:07.450 [2024-07-23 06:24:19.874805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:07.450 06:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:07.450 06:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:07.450 06:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:07.450 06:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:07.450 06:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:07.450 06:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:07.450 06:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:07.450 06:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:07.450 06:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:07.450 06:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:07.450 06:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.450 06:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.708 06:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:07.708 "name": "Existed_Raid", 00:11:07.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.708 "strip_size_kb": 64, 00:11:07.708 "state": "configuring", 00:11:07.708 "raid_level": "raid0", 00:11:07.708 "superblock": false, 00:11:07.708 "num_base_bdevs": 3, 00:11:07.708 "num_base_bdevs_discovered": 0, 00:11:07.708 "num_base_bdevs_operational": 3, 00:11:07.708 "base_bdevs_list": [ 
00:11:07.708 { 00:11:07.708 "name": "BaseBdev1", 00:11:07.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.708 "is_configured": false, 00:11:07.708 "data_offset": 0, 00:11:07.708 "data_size": 0 00:11:07.708 }, 00:11:07.708 { 00:11:07.708 "name": "BaseBdev2", 00:11:07.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.708 "is_configured": false, 00:11:07.708 "data_offset": 0, 00:11:07.708 "data_size": 0 00:11:07.708 }, 00:11:07.708 { 00:11:07.708 "name": "BaseBdev3", 00:11:07.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.708 "is_configured": false, 00:11:07.708 "data_offset": 0, 00:11:07.708 "data_size": 0 00:11:07.708 } 00:11:07.708 ] 00:11:07.708 }' 00:11:07.708 06:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:07.708 06:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.966 06:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:08.224 [2024-07-23 06:24:20.694725] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:08.224 [2024-07-23 06:24:20.694750] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34f176034500 name Existed_Raid, state configuring 00:11:08.224 06:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:08.483 [2024-07-23 06:24:20.978748] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:08.483 [2024-07-23 06:24:20.978811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:08.483 [2024-07-23 06:24:20.978816] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:08.483 [2024-07-23 06:24:20.978842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:08.483 [2024-07-23 06:24:20.978845] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:08.483 [2024-07-23 06:24:20.978853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:08.483 06:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:09.051 [2024-07-23 06:24:21.263784] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.051 BaseBdev1 00:11:09.051 06:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:09.051 06:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:09.051 06:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:09.051 06:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:09.051 06:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:09.051 06:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:09.051 06:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:09.051 06:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:09.619 [ 00:11:09.619 { 00:11:09.619 "name": "BaseBdev1", 00:11:09.619 "aliases": [ 00:11:09.619 "32a6f185-48bc-11ef-a06c-59ddad71024c" 00:11:09.619 ], 00:11:09.619 "product_name": "Malloc disk", 00:11:09.619 "block_size": 512, 00:11:09.619 "num_blocks": 65536, 00:11:09.619 "uuid": "32a6f185-48bc-11ef-a06c-59ddad71024c", 00:11:09.619 "assigned_rate_limits": { 00:11:09.619 "rw_ios_per_sec": 0, 00:11:09.619 "rw_mbytes_per_sec": 0, 00:11:09.619 "r_mbytes_per_sec": 0, 00:11:09.619 "w_mbytes_per_sec": 0 00:11:09.619 }, 00:11:09.619 "claimed": true, 00:11:09.619 "claim_type": "exclusive_write", 00:11:09.619 "zoned": false, 00:11:09.619 "supported_io_types": { 00:11:09.619 "read": true, 00:11:09.619 "write": true, 00:11:09.619 "unmap": true, 00:11:09.619 "flush": true, 00:11:09.619 "reset": true, 00:11:09.619 "nvme_admin": false, 00:11:09.619 "nvme_io": false, 00:11:09.619 "nvme_io_md": false, 00:11:09.619 "write_zeroes": true, 00:11:09.619 "zcopy": true, 00:11:09.619 "get_zone_info": false, 00:11:09.619 "zone_management": false, 00:11:09.619 "zone_append": false, 00:11:09.619 "compare": false, 00:11:09.619 "compare_and_write": false, 00:11:09.619 "abort": true, 00:11:09.619 "seek_hole": false, 00:11:09.619 "seek_data": false, 00:11:09.619 "copy": true, 00:11:09.619 "nvme_iov_md": false 00:11:09.619 }, 00:11:09.619 "memory_domains": [ 00:11:09.619 { 00:11:09.619 "dma_device_id": "system", 00:11:09.619 "dma_device_type": 1 00:11:09.619 }, 00:11:09.619 { 00:11:09.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.619 "dma_device_type": 2 00:11:09.619 } 00:11:09.619 ], 00:11:09.619 "driver_specific": {} 00:11:09.619 } 00:11:09.619 ] 00:11:09.619 06:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:09.619 06:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:09.619 06:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:09.619 06:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:09.619 06:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:09.619 06:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:09.619 06:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:09.619 06:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:09.619 06:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:09.619 06:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:09.619 06:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:09.619 06:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.619 06:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.620 06:24:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:09.620 "name": "Existed_Raid", 00:11:09.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.620 "strip_size_kb": 64, 00:11:09.620 "state": "configuring", 00:11:09.620 "raid_level": "raid0", 00:11:09.620 "superblock": false, 00:11:09.620 "num_base_bdevs": 3, 00:11:09.620 "num_base_bdevs_discovered": 1, 00:11:09.620 "num_base_bdevs_operational": 3, 00:11:09.620 "base_bdevs_list": [ 00:11:09.620 { 00:11:09.620 "name": "BaseBdev1", 00:11:09.620 "uuid": "32a6f185-48bc-11ef-a06c-59ddad71024c", 00:11:09.620 "is_configured": true, 00:11:09.620 "data_offset": 0, 00:11:09.620 "data_size": 65536 00:11:09.620 }, 00:11:09.620 { 00:11:09.620 "name": "BaseBdev2", 00:11:09.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.620 "is_configured": false, 00:11:09.620 "data_offset": 0, 00:11:09.620 "data_size": 0 00:11:09.620 }, 00:11:09.620 { 00:11:09.620 "name": "BaseBdev3", 00:11:09.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.620 "is_configured": false, 00:11:09.620 "data_offset": 0, 00:11:09.620 "data_size": 0 00:11:09.620 } 00:11:09.620 ] 00:11:09.620 }' 00:11:09.620 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:09.620 06:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.879 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:10.146 [2024-07-23 06:24:22.638790] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:10.146 [2024-07-23 06:24:22.638819] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34f176034500 name Existed_Raid, state configuring 00:11:10.146 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:10.426 [2024-07-23 06:24:22.930815] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.426 [2024-07-23 06:24:22.931611] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:10.426 [2024-07-23 06:24:22.931649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:10.426 [2024-07-23 06:24:22.931654] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:10.426 [2024-07-23 06:24:22.931663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:10.687 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:10.687 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:10.687 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:10.687 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:10.687 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:10.687 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:10.687 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:10.687 06:24:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:10.687 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:10.687 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:10.687 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:10.687 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:10.687 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:10.687 06:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.687 06:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:10.687 "name": "Existed_Raid", 00:11:10.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.687 "strip_size_kb": 64, 00:11:10.687 "state": "configuring", 00:11:10.687 "raid_level": "raid0", 00:11:10.687 "superblock": false, 00:11:10.687 "num_base_bdevs": 3, 00:11:10.687 "num_base_bdevs_discovered": 1, 00:11:10.687 "num_base_bdevs_operational": 3, 00:11:10.687 "base_bdevs_list": [ 00:11:10.687 { 00:11:10.687 "name": "BaseBdev1", 00:11:10.687 "uuid": "32a6f185-48bc-11ef-a06c-59ddad71024c", 00:11:10.687 "is_configured": true, 00:11:10.687 "data_offset": 0, 00:11:10.687 "data_size": 65536 00:11:10.687 }, 00:11:10.687 { 00:11:10.687 "name": "BaseBdev2", 00:11:10.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.687 "is_configured": false, 00:11:10.687 "data_offset": 0, 00:11:10.687 "data_size": 0 00:11:10.687 }, 00:11:10.687 { 00:11:10.687 "name": "BaseBdev3", 00:11:10.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.687 "is_configured": false, 00:11:10.687 "data_offset": 0, 00:11:10.687 "data_size": 0 00:11:10.687 } 00:11:10.687 ] 00:11:10.687 }' 00:11:10.687 06:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:10.687 06:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.255 06:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:11.255 [2024-07-23 06:24:23.734980] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:11.255 BaseBdev2 00:11:11.255 06:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:11.255 06:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:11.255 06:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:11.255 06:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:11.255 06:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:11.255 06:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:11.255 06:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:11.514 06:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:11.774 [ 00:11:11.774 { 00:11:11.774 "name": "BaseBdev2", 00:11:11.774 "aliases": [ 00:11:11.774 "342026a3-48bc-11ef-a06c-59ddad71024c" 00:11:11.774 ], 00:11:11.774 "product_name": "Malloc disk", 00:11:11.774 "block_size": 512, 00:11:11.774 "num_blocks": 65536, 00:11:11.774 "uuid": "342026a3-48bc-11ef-a06c-59ddad71024c", 00:11:11.774 "assigned_rate_limits": { 00:11:11.774 "rw_ios_per_sec": 0, 00:11:11.774 "rw_mbytes_per_sec": 0, 00:11:11.774 "r_mbytes_per_sec": 0, 00:11:11.774 "w_mbytes_per_sec": 0 00:11:11.774 }, 00:11:11.774 "claimed": true, 00:11:11.774 "claim_type": "exclusive_write", 00:11:11.774 "zoned": false, 00:11:11.774 "supported_io_types": { 00:11:11.774 "read": true, 00:11:11.774 "write": true, 00:11:11.774 "unmap": true, 00:11:11.774 "flush": true, 00:11:11.774 "reset": true, 00:11:11.774 "nvme_admin": false, 00:11:11.774 "nvme_io": false, 00:11:11.774 "nvme_io_md": false, 00:11:11.774 "write_zeroes": true, 00:11:11.774 "zcopy": true, 00:11:11.774 "get_zone_info": false, 00:11:11.774 "zone_management": false, 00:11:11.774 "zone_append": false, 00:11:11.774 "compare": false, 00:11:11.774 "compare_and_write": false, 00:11:11.774 "abort": true, 00:11:11.774 "seek_hole": false, 00:11:11.774 "seek_data": false, 00:11:11.774 "copy": true, 00:11:11.774 "nvme_iov_md": false 00:11:11.774 }, 00:11:11.774 "memory_domains": [ 00:11:11.774 { 00:11:11.774 "dma_device_id": "system", 00:11:11.774 "dma_device_type": 1 00:11:11.774 }, 00:11:11.774 { 00:11:11.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.774 "dma_device_type": 2 00:11:11.774 } 00:11:11.774 ], 00:11:11.774 "driver_specific": {} 00:11:11.774 } 00:11:11.774 ] 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.774 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
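[editor's note] For reference, a minimal standalone sketch of the RPC sequence the trace above exercises, assuming an SPDK application is already listening on /var/tmp/spdk-raid.sock; the malloc size (32 MB, 512-byte blocks, i.e. 65536 blocks), the 64 KiB strip size and the bdev names mirror the values recorded in the log:

  # convenience wrapper for the rpc.py client used throughout the trace
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # create three malloc bdevs to act as RAID base devices, then wait for examine to finish
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $RPC bdev_malloc_create 32 512 -b "$b"
  done
  $RPC bdev_wait_for_examine

  # assemble them into a RAID0 volume with a 64 KiB strip size
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # inspect the resulting state ("configuring" while base bdevs are missing, "online" once all three are claimed)
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

  # tear the volume down again
  $RPC bdev_raid_delete Existed_Raid

[end editor's note]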
00:11:12.033 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:12.034 "name": "Existed_Raid", 00:11:12.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.034 "strip_size_kb": 64, 00:11:12.034 "state": "configuring", 00:11:12.034 "raid_level": "raid0", 00:11:12.034 "superblock": false, 00:11:12.034 "num_base_bdevs": 3, 00:11:12.034 "num_base_bdevs_discovered": 2, 00:11:12.034 "num_base_bdevs_operational": 3, 00:11:12.034 "base_bdevs_list": [ 00:11:12.034 { 00:11:12.034 "name": "BaseBdev1", 00:11:12.034 "uuid": "32a6f185-48bc-11ef-a06c-59ddad71024c", 00:11:12.034 "is_configured": true, 00:11:12.034 "data_offset": 0, 00:11:12.034 "data_size": 65536 00:11:12.034 }, 00:11:12.034 { 00:11:12.034 "name": "BaseBdev2", 00:11:12.034 "uuid": "342026a3-48bc-11ef-a06c-59ddad71024c", 00:11:12.034 "is_configured": true, 00:11:12.034 "data_offset": 0, 00:11:12.034 "data_size": 65536 00:11:12.034 }, 00:11:12.034 { 00:11:12.034 "name": "BaseBdev3", 00:11:12.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.034 "is_configured": false, 00:11:12.034 "data_offset": 0, 00:11:12.034 "data_size": 0 00:11:12.034 } 00:11:12.034 ] 00:11:12.034 }' 00:11:12.034 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:12.034 06:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.292 06:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:12.551 [2024-07-23 06:24:25.027001] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.551 [2024-07-23 06:24:25.027029] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x34f176034a00 00:11:12.551 [2024-07-23 06:24:25.027034] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:12.551 [2024-07-23 06:24:25.027056] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34f176097e20 00:11:12.551 [2024-07-23 06:24:25.027154] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x34f176034a00 00:11:12.551 [2024-07-23 06:24:25.027158] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x34f176034a00 00:11:12.551 [2024-07-23 06:24:25.027190] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.551 BaseBdev3 00:11:12.551 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:12.551 06:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:12.551 06:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:12.551 06:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:12.551 06:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:12.551 06:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:12.551 06:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:12.810 06:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:13.082 [ 00:11:13.082 { 00:11:13.082 "name": "BaseBdev3", 00:11:13.082 "aliases": [ 00:11:13.082 "34e54cb9-48bc-11ef-a06c-59ddad71024c" 00:11:13.082 ], 00:11:13.082 "product_name": "Malloc disk", 00:11:13.082 "block_size": 512, 00:11:13.082 "num_blocks": 65536, 00:11:13.082 "uuid": "34e54cb9-48bc-11ef-a06c-59ddad71024c", 00:11:13.082 "assigned_rate_limits": { 00:11:13.082 "rw_ios_per_sec": 0, 00:11:13.082 "rw_mbytes_per_sec": 0, 00:11:13.082 "r_mbytes_per_sec": 0, 00:11:13.082 "w_mbytes_per_sec": 0 00:11:13.082 }, 00:11:13.082 "claimed": true, 00:11:13.082 "claim_type": "exclusive_write", 00:11:13.082 "zoned": false, 00:11:13.082 "supported_io_types": { 00:11:13.082 "read": true, 00:11:13.082 "write": true, 00:11:13.082 "unmap": true, 00:11:13.082 "flush": true, 00:11:13.082 "reset": true, 00:11:13.082 "nvme_admin": false, 00:11:13.082 "nvme_io": false, 00:11:13.082 "nvme_io_md": false, 00:11:13.082 "write_zeroes": true, 00:11:13.082 "zcopy": true, 00:11:13.082 "get_zone_info": false, 00:11:13.082 "zone_management": false, 00:11:13.082 "zone_append": false, 00:11:13.082 "compare": false, 00:11:13.082 "compare_and_write": false, 00:11:13.082 "abort": true, 00:11:13.082 "seek_hole": false, 00:11:13.082 "seek_data": false, 00:11:13.082 "copy": true, 00:11:13.082 "nvme_iov_md": false 00:11:13.082 }, 00:11:13.082 "memory_domains": [ 00:11:13.082 { 00:11:13.082 "dma_device_id": "system", 00:11:13.082 "dma_device_type": 1 00:11:13.082 }, 00:11:13.082 { 00:11:13.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.082 "dma_device_type": 2 00:11:13.082 } 00:11:13.082 ], 00:11:13.082 "driver_specific": {} 00:11:13.082 } 00:11:13.082 ] 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:13.082 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.341 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:11:13.341 "name": "Existed_Raid", 00:11:13.341 "uuid": "34e552b9-48bc-11ef-a06c-59ddad71024c", 00:11:13.341 "strip_size_kb": 64, 00:11:13.341 "state": "online", 00:11:13.341 "raid_level": "raid0", 00:11:13.341 "superblock": false, 00:11:13.341 "num_base_bdevs": 3, 00:11:13.341 "num_base_bdevs_discovered": 3, 00:11:13.341 "num_base_bdevs_operational": 3, 00:11:13.341 "base_bdevs_list": [ 00:11:13.341 { 00:11:13.341 "name": "BaseBdev1", 00:11:13.341 "uuid": "32a6f185-48bc-11ef-a06c-59ddad71024c", 00:11:13.341 "is_configured": true, 00:11:13.341 "data_offset": 0, 00:11:13.341 "data_size": 65536 00:11:13.341 }, 00:11:13.341 { 00:11:13.341 "name": "BaseBdev2", 00:11:13.341 "uuid": "342026a3-48bc-11ef-a06c-59ddad71024c", 00:11:13.341 "is_configured": true, 00:11:13.341 "data_offset": 0, 00:11:13.341 "data_size": 65536 00:11:13.341 }, 00:11:13.341 { 00:11:13.341 "name": "BaseBdev3", 00:11:13.341 "uuid": "34e54cb9-48bc-11ef-a06c-59ddad71024c", 00:11:13.341 "is_configured": true, 00:11:13.341 "data_offset": 0, 00:11:13.341 "data_size": 65536 00:11:13.341 } 00:11:13.341 ] 00:11:13.341 }' 00:11:13.341 06:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:13.341 06:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.909 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.909 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:13.909 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:13.909 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:13.909 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:13.909 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:13.909 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:13.909 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:13.909 [2024-07-23 06:24:26.410966] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.168 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:14.168 "name": "Existed_Raid", 00:11:14.168 "aliases": [ 00:11:14.168 "34e552b9-48bc-11ef-a06c-59ddad71024c" 00:11:14.168 ], 00:11:14.168 "product_name": "Raid Volume", 00:11:14.168 "block_size": 512, 00:11:14.168 "num_blocks": 196608, 00:11:14.168 "uuid": "34e552b9-48bc-11ef-a06c-59ddad71024c", 00:11:14.168 "assigned_rate_limits": { 00:11:14.168 "rw_ios_per_sec": 0, 00:11:14.168 "rw_mbytes_per_sec": 0, 00:11:14.168 "r_mbytes_per_sec": 0, 00:11:14.168 "w_mbytes_per_sec": 0 00:11:14.168 }, 00:11:14.168 "claimed": false, 00:11:14.168 "zoned": false, 00:11:14.168 "supported_io_types": { 00:11:14.168 "read": true, 00:11:14.168 "write": true, 00:11:14.168 "unmap": true, 00:11:14.168 "flush": true, 00:11:14.168 "reset": true, 00:11:14.168 "nvme_admin": false, 00:11:14.168 "nvme_io": false, 00:11:14.168 "nvme_io_md": false, 00:11:14.168 "write_zeroes": true, 00:11:14.168 "zcopy": false, 00:11:14.168 "get_zone_info": false, 00:11:14.168 "zone_management": false, 00:11:14.168 "zone_append": false, 00:11:14.168 "compare": false, 
00:11:14.168 "compare_and_write": false, 00:11:14.168 "abort": false, 00:11:14.168 "seek_hole": false, 00:11:14.168 "seek_data": false, 00:11:14.168 "copy": false, 00:11:14.168 "nvme_iov_md": false 00:11:14.168 }, 00:11:14.168 "memory_domains": [ 00:11:14.168 { 00:11:14.168 "dma_device_id": "system", 00:11:14.168 "dma_device_type": 1 00:11:14.168 }, 00:11:14.168 { 00:11:14.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.168 "dma_device_type": 2 00:11:14.168 }, 00:11:14.168 { 00:11:14.168 "dma_device_id": "system", 00:11:14.168 "dma_device_type": 1 00:11:14.168 }, 00:11:14.168 { 00:11:14.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.168 "dma_device_type": 2 00:11:14.168 }, 00:11:14.168 { 00:11:14.168 "dma_device_id": "system", 00:11:14.168 "dma_device_type": 1 00:11:14.168 }, 00:11:14.168 { 00:11:14.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.168 "dma_device_type": 2 00:11:14.168 } 00:11:14.168 ], 00:11:14.168 "driver_specific": { 00:11:14.168 "raid": { 00:11:14.168 "uuid": "34e552b9-48bc-11ef-a06c-59ddad71024c", 00:11:14.168 "strip_size_kb": 64, 00:11:14.168 "state": "online", 00:11:14.168 "raid_level": "raid0", 00:11:14.168 "superblock": false, 00:11:14.168 "num_base_bdevs": 3, 00:11:14.168 "num_base_bdevs_discovered": 3, 00:11:14.168 "num_base_bdevs_operational": 3, 00:11:14.168 "base_bdevs_list": [ 00:11:14.168 { 00:11:14.168 "name": "BaseBdev1", 00:11:14.168 "uuid": "32a6f185-48bc-11ef-a06c-59ddad71024c", 00:11:14.168 "is_configured": true, 00:11:14.168 "data_offset": 0, 00:11:14.168 "data_size": 65536 00:11:14.168 }, 00:11:14.168 { 00:11:14.168 "name": "BaseBdev2", 00:11:14.168 "uuid": "342026a3-48bc-11ef-a06c-59ddad71024c", 00:11:14.168 "is_configured": true, 00:11:14.168 "data_offset": 0, 00:11:14.168 "data_size": 65536 00:11:14.168 }, 00:11:14.168 { 00:11:14.168 "name": "BaseBdev3", 00:11:14.168 "uuid": "34e54cb9-48bc-11ef-a06c-59ddad71024c", 00:11:14.168 "is_configured": true, 00:11:14.168 "data_offset": 0, 00:11:14.168 "data_size": 65536 00:11:14.168 } 00:11:14.168 ] 00:11:14.168 } 00:11:14.168 } 00:11:14.168 }' 00:11:14.168 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:14.168 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:14.168 BaseBdev2 00:11:14.168 BaseBdev3' 00:11:14.168 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:14.168 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:14.168 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:14.428 "name": "BaseBdev1", 00:11:14.428 "aliases": [ 00:11:14.428 "32a6f185-48bc-11ef-a06c-59ddad71024c" 00:11:14.428 ], 00:11:14.428 "product_name": "Malloc disk", 00:11:14.428 "block_size": 512, 00:11:14.428 "num_blocks": 65536, 00:11:14.428 "uuid": "32a6f185-48bc-11ef-a06c-59ddad71024c", 00:11:14.428 "assigned_rate_limits": { 00:11:14.428 "rw_ios_per_sec": 0, 00:11:14.428 "rw_mbytes_per_sec": 0, 00:11:14.428 "r_mbytes_per_sec": 0, 00:11:14.428 "w_mbytes_per_sec": 0 00:11:14.428 }, 00:11:14.428 "claimed": true, 00:11:14.428 "claim_type": "exclusive_write", 00:11:14.428 "zoned": false, 00:11:14.428 
"supported_io_types": { 00:11:14.428 "read": true, 00:11:14.428 "write": true, 00:11:14.428 "unmap": true, 00:11:14.428 "flush": true, 00:11:14.428 "reset": true, 00:11:14.428 "nvme_admin": false, 00:11:14.428 "nvme_io": false, 00:11:14.428 "nvme_io_md": false, 00:11:14.428 "write_zeroes": true, 00:11:14.428 "zcopy": true, 00:11:14.428 "get_zone_info": false, 00:11:14.428 "zone_management": false, 00:11:14.428 "zone_append": false, 00:11:14.428 "compare": false, 00:11:14.428 "compare_and_write": false, 00:11:14.428 "abort": true, 00:11:14.428 "seek_hole": false, 00:11:14.428 "seek_data": false, 00:11:14.428 "copy": true, 00:11:14.428 "nvme_iov_md": false 00:11:14.428 }, 00:11:14.428 "memory_domains": [ 00:11:14.428 { 00:11:14.428 "dma_device_id": "system", 00:11:14.428 "dma_device_type": 1 00:11:14.428 }, 00:11:14.428 { 00:11:14.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.428 "dma_device_type": 2 00:11:14.428 } 00:11:14.428 ], 00:11:14.428 "driver_specific": {} 00:11:14.428 }' 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:14.428 06:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:14.687 "name": "BaseBdev2", 00:11:14.687 "aliases": [ 00:11:14.687 "342026a3-48bc-11ef-a06c-59ddad71024c" 00:11:14.687 ], 00:11:14.687 "product_name": "Malloc disk", 00:11:14.687 "block_size": 512, 00:11:14.687 "num_blocks": 65536, 00:11:14.687 "uuid": "342026a3-48bc-11ef-a06c-59ddad71024c", 00:11:14.687 "assigned_rate_limits": { 00:11:14.687 "rw_ios_per_sec": 0, 00:11:14.687 "rw_mbytes_per_sec": 0, 00:11:14.687 "r_mbytes_per_sec": 0, 00:11:14.687 "w_mbytes_per_sec": 0 00:11:14.687 }, 00:11:14.687 "claimed": true, 00:11:14.687 "claim_type": "exclusive_write", 00:11:14.687 "zoned": false, 00:11:14.687 "supported_io_types": { 00:11:14.687 "read": true, 00:11:14.687 "write": true, 00:11:14.687 "unmap": true, 00:11:14.687 "flush": true, 00:11:14.687 "reset": true, 00:11:14.687 "nvme_admin": false, 
00:11:14.687 "nvme_io": false, 00:11:14.687 "nvme_io_md": false, 00:11:14.687 "write_zeroes": true, 00:11:14.687 "zcopy": true, 00:11:14.687 "get_zone_info": false, 00:11:14.687 "zone_management": false, 00:11:14.687 "zone_append": false, 00:11:14.687 "compare": false, 00:11:14.687 "compare_and_write": false, 00:11:14.687 "abort": true, 00:11:14.687 "seek_hole": false, 00:11:14.687 "seek_data": false, 00:11:14.687 "copy": true, 00:11:14.687 "nvme_iov_md": false 00:11:14.687 }, 00:11:14.687 "memory_domains": [ 00:11:14.687 { 00:11:14.687 "dma_device_id": "system", 00:11:14.687 "dma_device_type": 1 00:11:14.687 }, 00:11:14.687 { 00:11:14.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.687 "dma_device_type": 2 00:11:14.687 } 00:11:14.687 ], 00:11:14.687 "driver_specific": {} 00:11:14.687 }' 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:14.687 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:14.946 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:14.946 "name": "BaseBdev3", 00:11:14.946 "aliases": [ 00:11:14.946 "34e54cb9-48bc-11ef-a06c-59ddad71024c" 00:11:14.946 ], 00:11:14.946 "product_name": "Malloc disk", 00:11:14.946 "block_size": 512, 00:11:14.946 "num_blocks": 65536, 00:11:14.946 "uuid": "34e54cb9-48bc-11ef-a06c-59ddad71024c", 00:11:14.946 "assigned_rate_limits": { 00:11:14.946 "rw_ios_per_sec": 0, 00:11:14.946 "rw_mbytes_per_sec": 0, 00:11:14.946 "r_mbytes_per_sec": 0, 00:11:14.946 "w_mbytes_per_sec": 0 00:11:14.946 }, 00:11:14.946 "claimed": true, 00:11:14.946 "claim_type": "exclusive_write", 00:11:14.946 "zoned": false, 00:11:14.946 "supported_io_types": { 00:11:14.946 "read": true, 00:11:14.946 "write": true, 00:11:14.946 "unmap": true, 00:11:14.946 "flush": true, 00:11:14.946 "reset": true, 00:11:14.946 "nvme_admin": false, 00:11:14.946 "nvme_io": false, 00:11:14.946 "nvme_io_md": false, 00:11:14.946 "write_zeroes": true, 00:11:14.946 "zcopy": true, 00:11:14.946 "get_zone_info": false, 00:11:14.946 "zone_management": 
false, 00:11:14.946 "zone_append": false, 00:11:14.946 "compare": false, 00:11:14.946 "compare_and_write": false, 00:11:14.946 "abort": true, 00:11:14.946 "seek_hole": false, 00:11:14.946 "seek_data": false, 00:11:14.946 "copy": true, 00:11:14.946 "nvme_iov_md": false 00:11:14.946 }, 00:11:14.946 "memory_domains": [ 00:11:14.946 { 00:11:14.946 "dma_device_id": "system", 00:11:14.946 "dma_device_type": 1 00:11:14.946 }, 00:11:14.946 { 00:11:14.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.946 "dma_device_type": 2 00:11:14.946 } 00:11:14.946 ], 00:11:14.946 "driver_specific": {} 00:11:14.946 }' 00:11:14.946 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.946 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.946 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:14.946 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.946 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.946 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:14.946 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.946 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.946 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:14.946 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.946 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.946 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:14.946 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:15.204 [2024-07-23 06:24:27.718958] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:15.204 [2024-07-23 06:24:27.718986] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.204 [2024-07-23 06:24:27.719001] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:15.463 06:24:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.463 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.721 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:15.721 "name": "Existed_Raid", 00:11:15.721 "uuid": "34e552b9-48bc-11ef-a06c-59ddad71024c", 00:11:15.721 "strip_size_kb": 64, 00:11:15.721 "state": "offline", 00:11:15.721 "raid_level": "raid0", 00:11:15.721 "superblock": false, 00:11:15.721 "num_base_bdevs": 3, 00:11:15.721 "num_base_bdevs_discovered": 2, 00:11:15.721 "num_base_bdevs_operational": 2, 00:11:15.721 "base_bdevs_list": [ 00:11:15.721 { 00:11:15.721 "name": null, 00:11:15.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.721 "is_configured": false, 00:11:15.721 "data_offset": 0, 00:11:15.721 "data_size": 65536 00:11:15.721 }, 00:11:15.721 { 00:11:15.721 "name": "BaseBdev2", 00:11:15.721 "uuid": "342026a3-48bc-11ef-a06c-59ddad71024c", 00:11:15.721 "is_configured": true, 00:11:15.721 "data_offset": 0, 00:11:15.721 "data_size": 65536 00:11:15.721 }, 00:11:15.721 { 00:11:15.721 "name": "BaseBdev3", 00:11:15.721 "uuid": "34e54cb9-48bc-11ef-a06c-59ddad71024c", 00:11:15.721 "is_configured": true, 00:11:15.721 "data_offset": 0, 00:11:15.721 "data_size": 65536 00:11:15.721 } 00:11:15.721 ] 00:11:15.721 }' 00:11:15.721 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:15.721 06:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.029 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:16.029 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:16.029 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.029 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:16.287 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:16.287 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.287 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:16.546 [2024-07-23 06:24:28.820737] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:16.546 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:16.546 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:16.546 06:24:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.546 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:16.804 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:16.804 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.804 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:16.804 [2024-07-23 06:24:29.306490] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.804 [2024-07-23 06:24:29.306520] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34f176034a00 name Existed_Raid, state offline 00:11:17.062 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:17.062 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:17.062 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:17.062 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:17.062 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:17.062 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:17.062 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:17.062 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:17.062 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:17.062 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:17.320 BaseBdev2 00:11:17.320 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:17.320 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:17.320 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:17.320 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:17.320 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:17.320 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:17.320 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:17.578 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:17.837 [ 00:11:17.837 { 00:11:17.837 "name": "BaseBdev2", 00:11:17.837 "aliases": [ 00:11:17.837 "37bb36c6-48bc-11ef-a06c-59ddad71024c" 00:11:17.837 ], 00:11:17.837 "product_name": "Malloc disk", 00:11:17.837 "block_size": 512, 00:11:17.837 "num_blocks": 65536, 00:11:17.837 "uuid": "37bb36c6-48bc-11ef-a06c-59ddad71024c", 
00:11:17.837 "assigned_rate_limits": { 00:11:17.837 "rw_ios_per_sec": 0, 00:11:17.837 "rw_mbytes_per_sec": 0, 00:11:17.837 "r_mbytes_per_sec": 0, 00:11:17.837 "w_mbytes_per_sec": 0 00:11:17.837 }, 00:11:17.837 "claimed": false, 00:11:17.837 "zoned": false, 00:11:17.837 "supported_io_types": { 00:11:17.837 "read": true, 00:11:17.837 "write": true, 00:11:17.837 "unmap": true, 00:11:17.837 "flush": true, 00:11:17.837 "reset": true, 00:11:17.837 "nvme_admin": false, 00:11:17.837 "nvme_io": false, 00:11:17.837 "nvme_io_md": false, 00:11:17.837 "write_zeroes": true, 00:11:17.837 "zcopy": true, 00:11:17.837 "get_zone_info": false, 00:11:17.837 "zone_management": false, 00:11:17.837 "zone_append": false, 00:11:17.837 "compare": false, 00:11:17.837 "compare_and_write": false, 00:11:17.837 "abort": true, 00:11:17.837 "seek_hole": false, 00:11:17.837 "seek_data": false, 00:11:17.837 "copy": true, 00:11:17.837 "nvme_iov_md": false 00:11:17.837 }, 00:11:17.837 "memory_domains": [ 00:11:17.837 { 00:11:17.837 "dma_device_id": "system", 00:11:17.837 "dma_device_type": 1 00:11:17.837 }, 00:11:17.837 { 00:11:17.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.837 "dma_device_type": 2 00:11:17.837 } 00:11:17.837 ], 00:11:17.837 "driver_specific": {} 00:11:17.837 } 00:11:17.837 ] 00:11:17.837 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:17.837 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:17.837 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:17.837 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:18.095 BaseBdev3 00:11:18.095 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:18.095 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:18.095 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:18.095 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:18.095 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:18.095 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:18.095 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:18.354 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:18.611 [ 00:11:18.611 { 00:11:18.611 "name": "BaseBdev3", 00:11:18.611 "aliases": [ 00:11:18.611 "382f2ec3-48bc-11ef-a06c-59ddad71024c" 00:11:18.611 ], 00:11:18.611 "product_name": "Malloc disk", 00:11:18.611 "block_size": 512, 00:11:18.611 "num_blocks": 65536, 00:11:18.612 "uuid": "382f2ec3-48bc-11ef-a06c-59ddad71024c", 00:11:18.612 "assigned_rate_limits": { 00:11:18.612 "rw_ios_per_sec": 0, 00:11:18.612 "rw_mbytes_per_sec": 0, 00:11:18.612 "r_mbytes_per_sec": 0, 00:11:18.612 "w_mbytes_per_sec": 0 00:11:18.612 }, 00:11:18.612 "claimed": false, 00:11:18.612 "zoned": false, 00:11:18.612 "supported_io_types": { 00:11:18.612 "read": true, 00:11:18.612 "write": 
true, 00:11:18.612 "unmap": true, 00:11:18.612 "flush": true, 00:11:18.612 "reset": true, 00:11:18.612 "nvme_admin": false, 00:11:18.612 "nvme_io": false, 00:11:18.612 "nvme_io_md": false, 00:11:18.612 "write_zeroes": true, 00:11:18.612 "zcopy": true, 00:11:18.612 "get_zone_info": false, 00:11:18.612 "zone_management": false, 00:11:18.612 "zone_append": false, 00:11:18.612 "compare": false, 00:11:18.612 "compare_and_write": false, 00:11:18.612 "abort": true, 00:11:18.612 "seek_hole": false, 00:11:18.612 "seek_data": false, 00:11:18.612 "copy": true, 00:11:18.612 "nvme_iov_md": false 00:11:18.612 }, 00:11:18.612 "memory_domains": [ 00:11:18.612 { 00:11:18.612 "dma_device_id": "system", 00:11:18.612 "dma_device_type": 1 00:11:18.612 }, 00:11:18.612 { 00:11:18.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.612 "dma_device_type": 2 00:11:18.612 } 00:11:18.612 ], 00:11:18.612 "driver_specific": {} 00:11:18.612 } 00:11:18.612 ] 00:11:18.612 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:18.612 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:18.612 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:18.612 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:18.871 [2024-07-23 06:24:31.300262] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.871 [2024-07-23 06:24:31.300315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.871 [2024-07-23 06:24:31.300324] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.871 [2024-07-23 06:24:31.300865] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.871 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:18.871 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:18.871 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:18.871 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:18.871 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:18.871 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:18.871 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:18.871 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:18.871 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:18.871 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:18.871 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.871 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.150 06:24:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:19.150 "name": "Existed_Raid", 00:11:19.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.150 "strip_size_kb": 64, 00:11:19.150 "state": "configuring", 00:11:19.150 "raid_level": "raid0", 00:11:19.150 "superblock": false, 00:11:19.150 "num_base_bdevs": 3, 00:11:19.150 "num_base_bdevs_discovered": 2, 00:11:19.150 "num_base_bdevs_operational": 3, 00:11:19.150 "base_bdevs_list": [ 00:11:19.150 { 00:11:19.150 "name": "BaseBdev1", 00:11:19.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.150 "is_configured": false, 00:11:19.150 "data_offset": 0, 00:11:19.150 "data_size": 0 00:11:19.150 }, 00:11:19.150 { 00:11:19.150 "name": "BaseBdev2", 00:11:19.150 "uuid": "37bb36c6-48bc-11ef-a06c-59ddad71024c", 00:11:19.150 "is_configured": true, 00:11:19.150 "data_offset": 0, 00:11:19.150 "data_size": 65536 00:11:19.150 }, 00:11:19.150 { 00:11:19.150 "name": "BaseBdev3", 00:11:19.150 "uuid": "382f2ec3-48bc-11ef-a06c-59ddad71024c", 00:11:19.150 "is_configured": true, 00:11:19.150 "data_offset": 0, 00:11:19.150 "data_size": 65536 00:11:19.150 } 00:11:19.150 ] 00:11:19.150 }' 00:11:19.150 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:19.150 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.419 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:19.986 [2024-07-23 06:24:32.200281] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:19.986 "name": "Existed_Raid", 00:11:19.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.986 "strip_size_kb": 64, 00:11:19.986 "state": "configuring", 00:11:19.986 "raid_level": "raid0", 00:11:19.986 "superblock": false, 00:11:19.986 "num_base_bdevs": 3, 00:11:19.986 "num_base_bdevs_discovered": 1, 
00:11:19.986 "num_base_bdevs_operational": 3, 00:11:19.986 "base_bdevs_list": [ 00:11:19.986 { 00:11:19.986 "name": "BaseBdev1", 00:11:19.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.986 "is_configured": false, 00:11:19.986 "data_offset": 0, 00:11:19.986 "data_size": 0 00:11:19.986 }, 00:11:19.986 { 00:11:19.986 "name": null, 00:11:19.986 "uuid": "37bb36c6-48bc-11ef-a06c-59ddad71024c", 00:11:19.986 "is_configured": false, 00:11:19.986 "data_offset": 0, 00:11:19.986 "data_size": 65536 00:11:19.986 }, 00:11:19.986 { 00:11:19.986 "name": "BaseBdev3", 00:11:19.986 "uuid": "382f2ec3-48bc-11ef-a06c-59ddad71024c", 00:11:19.986 "is_configured": true, 00:11:19.986 "data_offset": 0, 00:11:19.986 "data_size": 65536 00:11:19.986 } 00:11:19.986 ] 00:11:19.986 }' 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:19.986 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.553 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:20.553 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:20.553 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:20.553 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:20.811 [2024-07-23 06:24:33.280438] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.811 BaseBdev1 00:11:20.811 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:20.811 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:20.811 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:20.811 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:20.811 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:20.811 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:20.811 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:21.068 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:21.325 [ 00:11:21.325 { 00:11:21.325 "name": "BaseBdev1", 00:11:21.325 "aliases": [ 00:11:21.325 "39d0ac7a-48bc-11ef-a06c-59ddad71024c" 00:11:21.325 ], 00:11:21.325 "product_name": "Malloc disk", 00:11:21.325 "block_size": 512, 00:11:21.325 "num_blocks": 65536, 00:11:21.325 "uuid": "39d0ac7a-48bc-11ef-a06c-59ddad71024c", 00:11:21.325 "assigned_rate_limits": { 00:11:21.325 "rw_ios_per_sec": 0, 00:11:21.325 "rw_mbytes_per_sec": 0, 00:11:21.326 "r_mbytes_per_sec": 0, 00:11:21.326 "w_mbytes_per_sec": 0 00:11:21.326 }, 00:11:21.326 "claimed": true, 00:11:21.326 "claim_type": "exclusive_write", 00:11:21.326 "zoned": false, 00:11:21.326 "supported_io_types": { 00:11:21.326 "read": true, 00:11:21.326 "write": true, 00:11:21.326 "unmap": 
true, 00:11:21.326 "flush": true, 00:11:21.326 "reset": true, 00:11:21.326 "nvme_admin": false, 00:11:21.326 "nvme_io": false, 00:11:21.326 "nvme_io_md": false, 00:11:21.326 "write_zeroes": true, 00:11:21.326 "zcopy": true, 00:11:21.326 "get_zone_info": false, 00:11:21.326 "zone_management": false, 00:11:21.326 "zone_append": false, 00:11:21.326 "compare": false, 00:11:21.326 "compare_and_write": false, 00:11:21.326 "abort": true, 00:11:21.326 "seek_hole": false, 00:11:21.326 "seek_data": false, 00:11:21.326 "copy": true, 00:11:21.326 "nvme_iov_md": false 00:11:21.326 }, 00:11:21.326 "memory_domains": [ 00:11:21.326 { 00:11:21.326 "dma_device_id": "system", 00:11:21.326 "dma_device_type": 1 00:11:21.326 }, 00:11:21.326 { 00:11:21.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.326 "dma_device_type": 2 00:11:21.326 } 00:11:21.326 ], 00:11:21.326 "driver_specific": {} 00:11:21.326 } 00:11:21.326 ] 00:11:21.585 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:21.585 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:21.585 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:21.585 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:21.585 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:21.585 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:21.585 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:21.585 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:21.585 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:21.585 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:21.585 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:21.585 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.585 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.585 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:21.585 "name": "Existed_Raid", 00:11:21.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.585 "strip_size_kb": 64, 00:11:21.585 "state": "configuring", 00:11:21.585 "raid_level": "raid0", 00:11:21.585 "superblock": false, 00:11:21.585 "num_base_bdevs": 3, 00:11:21.585 "num_base_bdevs_discovered": 2, 00:11:21.585 "num_base_bdevs_operational": 3, 00:11:21.585 "base_bdevs_list": [ 00:11:21.585 { 00:11:21.585 "name": "BaseBdev1", 00:11:21.585 "uuid": "39d0ac7a-48bc-11ef-a06c-59ddad71024c", 00:11:21.585 "is_configured": true, 00:11:21.585 "data_offset": 0, 00:11:21.585 "data_size": 65536 00:11:21.585 }, 00:11:21.585 { 00:11:21.585 "name": null, 00:11:21.585 "uuid": "37bb36c6-48bc-11ef-a06c-59ddad71024c", 00:11:21.585 "is_configured": false, 00:11:21.585 "data_offset": 0, 00:11:21.585 "data_size": 65536 00:11:21.585 }, 00:11:21.585 { 00:11:21.585 "name": "BaseBdev3", 00:11:21.585 "uuid": "382f2ec3-48bc-11ef-a06c-59ddad71024c", 
00:11:21.585 "is_configured": true, 00:11:21.585 "data_offset": 0, 00:11:21.585 "data_size": 65536 00:11:21.585 } 00:11:21.585 ] 00:11:21.585 }' 00:11:21.585 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:21.585 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.151 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:22.151 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.409 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:22.409 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:22.666 [2024-07-23 06:24:34.940352] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:22.666 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:22.666 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:22.666 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:22.666 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:22.666 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:22.666 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:22.666 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:22.666 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:22.666 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:22.666 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:22.666 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.666 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.924 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:22.924 "name": "Existed_Raid", 00:11:22.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.924 "strip_size_kb": 64, 00:11:22.924 "state": "configuring", 00:11:22.924 "raid_level": "raid0", 00:11:22.924 "superblock": false, 00:11:22.924 "num_base_bdevs": 3, 00:11:22.924 "num_base_bdevs_discovered": 1, 00:11:22.924 "num_base_bdevs_operational": 3, 00:11:22.924 "base_bdevs_list": [ 00:11:22.924 { 00:11:22.924 "name": "BaseBdev1", 00:11:22.924 "uuid": "39d0ac7a-48bc-11ef-a06c-59ddad71024c", 00:11:22.924 "is_configured": true, 00:11:22.924 "data_offset": 0, 00:11:22.924 "data_size": 65536 00:11:22.924 }, 00:11:22.924 { 00:11:22.924 "name": null, 00:11:22.924 "uuid": "37bb36c6-48bc-11ef-a06c-59ddad71024c", 00:11:22.924 "is_configured": false, 00:11:22.924 "data_offset": 0, 00:11:22.924 "data_size": 65536 00:11:22.924 }, 00:11:22.924 { 00:11:22.924 "name": null, 00:11:22.924 "uuid": 
"382f2ec3-48bc-11ef-a06c-59ddad71024c", 00:11:22.924 "is_configured": false, 00:11:22.924 "data_offset": 0, 00:11:22.924 "data_size": 65536 00:11:22.924 } 00:11:22.924 ] 00:11:22.924 }' 00:11:22.924 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:22.924 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.181 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.181 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:23.439 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:23.439 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:23.697 [2024-07-23 06:24:36.012389] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.697 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:23.697 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:23.697 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:23.697 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:23.697 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:23.697 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:23.697 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:23.697 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:23.697 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:23.697 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:23.697 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.697 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.954 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:23.954 "name": "Existed_Raid", 00:11:23.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.954 "strip_size_kb": 64, 00:11:23.954 "state": "configuring", 00:11:23.954 "raid_level": "raid0", 00:11:23.954 "superblock": false, 00:11:23.954 "num_base_bdevs": 3, 00:11:23.954 "num_base_bdevs_discovered": 2, 00:11:23.954 "num_base_bdevs_operational": 3, 00:11:23.954 "base_bdevs_list": [ 00:11:23.954 { 00:11:23.954 "name": "BaseBdev1", 00:11:23.954 "uuid": "39d0ac7a-48bc-11ef-a06c-59ddad71024c", 00:11:23.954 "is_configured": true, 00:11:23.954 "data_offset": 0, 00:11:23.954 "data_size": 65536 00:11:23.954 }, 00:11:23.954 { 00:11:23.954 "name": null, 00:11:23.954 "uuid": "37bb36c6-48bc-11ef-a06c-59ddad71024c", 00:11:23.954 "is_configured": false, 00:11:23.954 "data_offset": 0, 00:11:23.954 "data_size": 65536 
00:11:23.954 }, 00:11:23.954 { 00:11:23.954 "name": "BaseBdev3", 00:11:23.954 "uuid": "382f2ec3-48bc-11ef-a06c-59ddad71024c", 00:11:23.954 "is_configured": true, 00:11:23.954 "data_offset": 0, 00:11:23.954 "data_size": 65536 00:11:23.954 } 00:11:23.954 ] 00:11:23.955 }' 00:11:23.955 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:23.955 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.215 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.215 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:24.473 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:24.473 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:24.732 [2024-07-23 06:24:37.064417] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:24.732 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:24.732 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:24.732 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:24.732 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:24.732 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:24.732 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:24.732 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:24.732 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:24.732 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:24.732 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:24.732 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.732 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.991 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:24.991 "name": "Existed_Raid", 00:11:24.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.991 "strip_size_kb": 64, 00:11:24.991 "state": "configuring", 00:11:24.991 "raid_level": "raid0", 00:11:24.991 "superblock": false, 00:11:24.991 "num_base_bdevs": 3, 00:11:24.991 "num_base_bdevs_discovered": 1, 00:11:24.991 "num_base_bdevs_operational": 3, 00:11:24.991 "base_bdevs_list": [ 00:11:24.991 { 00:11:24.991 "name": null, 00:11:24.991 "uuid": "39d0ac7a-48bc-11ef-a06c-59ddad71024c", 00:11:24.991 "is_configured": false, 00:11:24.991 "data_offset": 0, 00:11:24.991 "data_size": 65536 00:11:24.991 }, 00:11:24.991 { 00:11:24.991 "name": null, 00:11:24.991 "uuid": "37bb36c6-48bc-11ef-a06c-59ddad71024c", 00:11:24.991 "is_configured": false, 00:11:24.991 "data_offset": 
0, 00:11:24.991 "data_size": 65536 00:11:24.991 }, 00:11:24.991 { 00:11:24.991 "name": "BaseBdev3", 00:11:24.991 "uuid": "382f2ec3-48bc-11ef-a06c-59ddad71024c", 00:11:24.991 "is_configured": true, 00:11:24.991 "data_offset": 0, 00:11:24.991 "data_size": 65536 00:11:24.991 } 00:11:24.991 ] 00:11:24.991 }' 00:11:24.991 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:24.991 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.250 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.250 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:25.509 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:25.509 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:25.768 [2024-07-23 06:24:38.190182] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.768 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:25.768 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:25.768 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:25.768 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:25.768 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:25.768 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:25.768 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:25.768 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:25.768 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:25.768 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:25.768 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.768 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.026 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:26.026 "name": "Existed_Raid", 00:11:26.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.026 "strip_size_kb": 64, 00:11:26.026 "state": "configuring", 00:11:26.026 "raid_level": "raid0", 00:11:26.026 "superblock": false, 00:11:26.026 "num_base_bdevs": 3, 00:11:26.026 "num_base_bdevs_discovered": 2, 00:11:26.026 "num_base_bdevs_operational": 3, 00:11:26.026 "base_bdevs_list": [ 00:11:26.026 { 00:11:26.026 "name": null, 00:11:26.026 "uuid": "39d0ac7a-48bc-11ef-a06c-59ddad71024c", 00:11:26.026 "is_configured": false, 00:11:26.026 "data_offset": 0, 00:11:26.026 "data_size": 65536 00:11:26.026 }, 00:11:26.026 { 00:11:26.026 "name": "BaseBdev2", 00:11:26.026 "uuid": 
"37bb36c6-48bc-11ef-a06c-59ddad71024c", 00:11:26.026 "is_configured": true, 00:11:26.026 "data_offset": 0, 00:11:26.026 "data_size": 65536 00:11:26.026 }, 00:11:26.026 { 00:11:26.026 "name": "BaseBdev3", 00:11:26.026 "uuid": "382f2ec3-48bc-11ef-a06c-59ddad71024c", 00:11:26.026 "is_configured": true, 00:11:26.026 "data_offset": 0, 00:11:26.026 "data_size": 65536 00:11:26.026 } 00:11:26.026 ] 00:11:26.026 }' 00:11:26.026 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:26.027 06:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.344 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.344 06:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:26.602 06:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:26.602 06:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.602 06:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:26.861 06:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 39d0ac7a-48bc-11ef-a06c-59ddad71024c 00:11:27.428 [2024-07-23 06:24:39.642366] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:27.428 [2024-07-23 06:24:39.642395] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x34f176034a00 00:11:27.428 [2024-07-23 06:24:39.642399] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:27.428 [2024-07-23 06:24:39.642422] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34f176097e20 00:11:27.428 [2024-07-23 06:24:39.642493] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x34f176034a00 00:11:27.428 [2024-07-23 06:24:39.642498] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x34f176034a00 00:11:27.428 [2024-07-23 06:24:39.642535] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.428 NewBaseBdev 00:11:27.428 06:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:27.428 06:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:11:27.429 06:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:27.429 06:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:27.429 06:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:27.429 06:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:27.429 06:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:27.429 06:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev -t 2000 00:11:27.699 [ 00:11:27.699 { 00:11:27.699 "name": "NewBaseBdev", 00:11:27.699 "aliases": [ 00:11:27.699 "39d0ac7a-48bc-11ef-a06c-59ddad71024c" 00:11:27.699 ], 00:11:27.699 "product_name": "Malloc disk", 00:11:27.699 "block_size": 512, 00:11:27.699 "num_blocks": 65536, 00:11:27.699 "uuid": "39d0ac7a-48bc-11ef-a06c-59ddad71024c", 00:11:27.699 "assigned_rate_limits": { 00:11:27.699 "rw_ios_per_sec": 0, 00:11:27.699 "rw_mbytes_per_sec": 0, 00:11:27.699 "r_mbytes_per_sec": 0, 00:11:27.699 "w_mbytes_per_sec": 0 00:11:27.699 }, 00:11:27.699 "claimed": true, 00:11:27.699 "claim_type": "exclusive_write", 00:11:27.699 "zoned": false, 00:11:27.699 "supported_io_types": { 00:11:27.699 "read": true, 00:11:27.699 "write": true, 00:11:27.699 "unmap": true, 00:11:27.699 "flush": true, 00:11:27.699 "reset": true, 00:11:27.699 "nvme_admin": false, 00:11:27.699 "nvme_io": false, 00:11:27.699 "nvme_io_md": false, 00:11:27.699 "write_zeroes": true, 00:11:27.699 "zcopy": true, 00:11:27.699 "get_zone_info": false, 00:11:27.699 "zone_management": false, 00:11:27.699 "zone_append": false, 00:11:27.699 "compare": false, 00:11:27.699 "compare_and_write": false, 00:11:27.699 "abort": true, 00:11:27.699 "seek_hole": false, 00:11:27.699 "seek_data": false, 00:11:27.699 "copy": true, 00:11:27.699 "nvme_iov_md": false 00:11:27.699 }, 00:11:27.699 "memory_domains": [ 00:11:27.699 { 00:11:27.699 "dma_device_id": "system", 00:11:27.699 "dma_device_type": 1 00:11:27.699 }, 00:11:27.699 { 00:11:27.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.699 "dma_device_type": 2 00:11:27.699 } 00:11:27.699 ], 00:11:27.699 "driver_specific": {} 00:11:27.699 } 00:11:27.699 ] 00:11:27.699 06:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:27.699 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:27.699 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:27.699 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:27.699 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:27.699 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:27.699 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:27.699 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:27.699 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:27.699 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:27.699 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:27.699 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:27.699 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.267 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:28.267 "name": "Existed_Raid", 00:11:28.267 "uuid": "3d9b7379-48bc-11ef-a06c-59ddad71024c", 00:11:28.267 "strip_size_kb": 64, 00:11:28.267 "state": "online", 00:11:28.267 "raid_level": "raid0", 
00:11:28.267 "superblock": false, 00:11:28.267 "num_base_bdevs": 3, 00:11:28.267 "num_base_bdevs_discovered": 3, 00:11:28.267 "num_base_bdevs_operational": 3, 00:11:28.267 "base_bdevs_list": [ 00:11:28.267 { 00:11:28.267 "name": "NewBaseBdev", 00:11:28.267 "uuid": "39d0ac7a-48bc-11ef-a06c-59ddad71024c", 00:11:28.267 "is_configured": true, 00:11:28.267 "data_offset": 0, 00:11:28.267 "data_size": 65536 00:11:28.267 }, 00:11:28.267 { 00:11:28.267 "name": "BaseBdev2", 00:11:28.267 "uuid": "37bb36c6-48bc-11ef-a06c-59ddad71024c", 00:11:28.267 "is_configured": true, 00:11:28.267 "data_offset": 0, 00:11:28.267 "data_size": 65536 00:11:28.267 }, 00:11:28.267 { 00:11:28.267 "name": "BaseBdev3", 00:11:28.267 "uuid": "382f2ec3-48bc-11ef-a06c-59ddad71024c", 00:11:28.267 "is_configured": true, 00:11:28.267 "data_offset": 0, 00:11:28.267 "data_size": 65536 00:11:28.267 } 00:11:28.267 ] 00:11:28.267 }' 00:11:28.267 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:28.267 06:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.526 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:28.527 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:28.527 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:28.527 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:28.527 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:28.527 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:28.527 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:28.527 06:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:28.786 [2024-07-23 06:24:41.058347] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.786 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:28.786 "name": "Existed_Raid", 00:11:28.786 "aliases": [ 00:11:28.786 "3d9b7379-48bc-11ef-a06c-59ddad71024c" 00:11:28.786 ], 00:11:28.786 "product_name": "Raid Volume", 00:11:28.786 "block_size": 512, 00:11:28.786 "num_blocks": 196608, 00:11:28.786 "uuid": "3d9b7379-48bc-11ef-a06c-59ddad71024c", 00:11:28.786 "assigned_rate_limits": { 00:11:28.786 "rw_ios_per_sec": 0, 00:11:28.786 "rw_mbytes_per_sec": 0, 00:11:28.786 "r_mbytes_per_sec": 0, 00:11:28.786 "w_mbytes_per_sec": 0 00:11:28.786 }, 00:11:28.786 "claimed": false, 00:11:28.786 "zoned": false, 00:11:28.786 "supported_io_types": { 00:11:28.786 "read": true, 00:11:28.786 "write": true, 00:11:28.786 "unmap": true, 00:11:28.786 "flush": true, 00:11:28.786 "reset": true, 00:11:28.786 "nvme_admin": false, 00:11:28.786 "nvme_io": false, 00:11:28.786 "nvme_io_md": false, 00:11:28.786 "write_zeroes": true, 00:11:28.786 "zcopy": false, 00:11:28.786 "get_zone_info": false, 00:11:28.787 "zone_management": false, 00:11:28.787 "zone_append": false, 00:11:28.787 "compare": false, 00:11:28.787 "compare_and_write": false, 00:11:28.787 "abort": false, 00:11:28.787 "seek_hole": false, 00:11:28.787 "seek_data": false, 00:11:28.787 "copy": false, 00:11:28.787 "nvme_iov_md": false 00:11:28.787 }, 00:11:28.787 
"memory_domains": [ 00:11:28.787 { 00:11:28.787 "dma_device_id": "system", 00:11:28.787 "dma_device_type": 1 00:11:28.787 }, 00:11:28.787 { 00:11:28.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.787 "dma_device_type": 2 00:11:28.787 }, 00:11:28.787 { 00:11:28.787 "dma_device_id": "system", 00:11:28.787 "dma_device_type": 1 00:11:28.787 }, 00:11:28.787 { 00:11:28.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.787 "dma_device_type": 2 00:11:28.787 }, 00:11:28.787 { 00:11:28.787 "dma_device_id": "system", 00:11:28.787 "dma_device_type": 1 00:11:28.787 }, 00:11:28.787 { 00:11:28.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.787 "dma_device_type": 2 00:11:28.787 } 00:11:28.787 ], 00:11:28.787 "driver_specific": { 00:11:28.787 "raid": { 00:11:28.787 "uuid": "3d9b7379-48bc-11ef-a06c-59ddad71024c", 00:11:28.787 "strip_size_kb": 64, 00:11:28.787 "state": "online", 00:11:28.787 "raid_level": "raid0", 00:11:28.787 "superblock": false, 00:11:28.787 "num_base_bdevs": 3, 00:11:28.787 "num_base_bdevs_discovered": 3, 00:11:28.787 "num_base_bdevs_operational": 3, 00:11:28.787 "base_bdevs_list": [ 00:11:28.787 { 00:11:28.787 "name": "NewBaseBdev", 00:11:28.787 "uuid": "39d0ac7a-48bc-11ef-a06c-59ddad71024c", 00:11:28.787 "is_configured": true, 00:11:28.787 "data_offset": 0, 00:11:28.787 "data_size": 65536 00:11:28.787 }, 00:11:28.787 { 00:11:28.787 "name": "BaseBdev2", 00:11:28.787 "uuid": "37bb36c6-48bc-11ef-a06c-59ddad71024c", 00:11:28.787 "is_configured": true, 00:11:28.787 "data_offset": 0, 00:11:28.787 "data_size": 65536 00:11:28.787 }, 00:11:28.787 { 00:11:28.787 "name": "BaseBdev3", 00:11:28.787 "uuid": "382f2ec3-48bc-11ef-a06c-59ddad71024c", 00:11:28.787 "is_configured": true, 00:11:28.787 "data_offset": 0, 00:11:28.787 "data_size": 65536 00:11:28.787 } 00:11:28.787 ] 00:11:28.787 } 00:11:28.787 } 00:11:28.787 }' 00:11:28.787 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.787 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:28.787 BaseBdev2 00:11:28.787 BaseBdev3' 00:11:28.787 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:28.787 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:28.787 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:29.046 "name": "NewBaseBdev", 00:11:29.046 "aliases": [ 00:11:29.046 "39d0ac7a-48bc-11ef-a06c-59ddad71024c" 00:11:29.046 ], 00:11:29.046 "product_name": "Malloc disk", 00:11:29.046 "block_size": 512, 00:11:29.046 "num_blocks": 65536, 00:11:29.046 "uuid": "39d0ac7a-48bc-11ef-a06c-59ddad71024c", 00:11:29.046 "assigned_rate_limits": { 00:11:29.046 "rw_ios_per_sec": 0, 00:11:29.046 "rw_mbytes_per_sec": 0, 00:11:29.046 "r_mbytes_per_sec": 0, 00:11:29.046 "w_mbytes_per_sec": 0 00:11:29.046 }, 00:11:29.046 "claimed": true, 00:11:29.046 "claim_type": "exclusive_write", 00:11:29.046 "zoned": false, 00:11:29.046 "supported_io_types": { 00:11:29.046 "read": true, 00:11:29.046 "write": true, 00:11:29.046 "unmap": true, 00:11:29.046 "flush": true, 00:11:29.046 "reset": true, 00:11:29.046 "nvme_admin": false, 00:11:29.046 "nvme_io": false, 
00:11:29.046 "nvme_io_md": false, 00:11:29.046 "write_zeroes": true, 00:11:29.046 "zcopy": true, 00:11:29.046 "get_zone_info": false, 00:11:29.046 "zone_management": false, 00:11:29.046 "zone_append": false, 00:11:29.046 "compare": false, 00:11:29.046 "compare_and_write": false, 00:11:29.046 "abort": true, 00:11:29.046 "seek_hole": false, 00:11:29.046 "seek_data": false, 00:11:29.046 "copy": true, 00:11:29.046 "nvme_iov_md": false 00:11:29.046 }, 00:11:29.046 "memory_domains": [ 00:11:29.046 { 00:11:29.046 "dma_device_id": "system", 00:11:29.046 "dma_device_type": 1 00:11:29.046 }, 00:11:29.046 { 00:11:29.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.046 "dma_device_type": 2 00:11:29.046 } 00:11:29.046 ], 00:11:29.046 "driver_specific": {} 00:11:29.046 }' 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:29.046 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:29.305 "name": "BaseBdev2", 00:11:29.305 "aliases": [ 00:11:29.305 "37bb36c6-48bc-11ef-a06c-59ddad71024c" 00:11:29.305 ], 00:11:29.305 "product_name": "Malloc disk", 00:11:29.305 "block_size": 512, 00:11:29.305 "num_blocks": 65536, 00:11:29.305 "uuid": "37bb36c6-48bc-11ef-a06c-59ddad71024c", 00:11:29.305 "assigned_rate_limits": { 00:11:29.305 "rw_ios_per_sec": 0, 00:11:29.305 "rw_mbytes_per_sec": 0, 00:11:29.305 "r_mbytes_per_sec": 0, 00:11:29.305 "w_mbytes_per_sec": 0 00:11:29.305 }, 00:11:29.305 "claimed": true, 00:11:29.305 "claim_type": "exclusive_write", 00:11:29.305 "zoned": false, 00:11:29.305 "supported_io_types": { 00:11:29.305 "read": true, 00:11:29.305 "write": true, 00:11:29.305 "unmap": true, 00:11:29.305 "flush": true, 00:11:29.305 "reset": true, 00:11:29.305 "nvme_admin": false, 00:11:29.305 "nvme_io": false, 00:11:29.305 "nvme_io_md": false, 00:11:29.305 "write_zeroes": true, 00:11:29.305 "zcopy": true, 00:11:29.305 "get_zone_info": false, 00:11:29.305 "zone_management": false, 00:11:29.305 "zone_append": 
false, 00:11:29.305 "compare": false, 00:11:29.305 "compare_and_write": false, 00:11:29.305 "abort": true, 00:11:29.305 "seek_hole": false, 00:11:29.305 "seek_data": false, 00:11:29.305 "copy": true, 00:11:29.305 "nvme_iov_md": false 00:11:29.305 }, 00:11:29.305 "memory_domains": [ 00:11:29.305 { 00:11:29.305 "dma_device_id": "system", 00:11:29.305 "dma_device_type": 1 00:11:29.305 }, 00:11:29.305 { 00:11:29.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.305 "dma_device_type": 2 00:11:29.305 } 00:11:29.305 ], 00:11:29.305 "driver_specific": {} 00:11:29.305 }' 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:29.305 06:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:29.564 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:29.564 "name": "BaseBdev3", 00:11:29.564 "aliases": [ 00:11:29.564 "382f2ec3-48bc-11ef-a06c-59ddad71024c" 00:11:29.564 ], 00:11:29.564 "product_name": "Malloc disk", 00:11:29.564 "block_size": 512, 00:11:29.564 "num_blocks": 65536, 00:11:29.564 "uuid": "382f2ec3-48bc-11ef-a06c-59ddad71024c", 00:11:29.564 "assigned_rate_limits": { 00:11:29.564 "rw_ios_per_sec": 0, 00:11:29.564 "rw_mbytes_per_sec": 0, 00:11:29.564 "r_mbytes_per_sec": 0, 00:11:29.564 "w_mbytes_per_sec": 0 00:11:29.564 }, 00:11:29.564 "claimed": true, 00:11:29.564 "claim_type": "exclusive_write", 00:11:29.564 "zoned": false, 00:11:29.564 "supported_io_types": { 00:11:29.564 "read": true, 00:11:29.564 "write": true, 00:11:29.564 "unmap": true, 00:11:29.564 "flush": true, 00:11:29.564 "reset": true, 00:11:29.564 "nvme_admin": false, 00:11:29.564 "nvme_io": false, 00:11:29.564 "nvme_io_md": false, 00:11:29.564 "write_zeroes": true, 00:11:29.564 "zcopy": true, 00:11:29.564 "get_zone_info": false, 00:11:29.564 "zone_management": false, 00:11:29.564 "zone_append": false, 00:11:29.564 "compare": false, 00:11:29.564 "compare_and_write": false, 00:11:29.564 "abort": true, 00:11:29.564 "seek_hole": false, 00:11:29.564 "seek_data": false, 00:11:29.564 "copy": true, 
00:11:29.564 "nvme_iov_md": false 00:11:29.564 }, 00:11:29.564 "memory_domains": [ 00:11:29.564 { 00:11:29.564 "dma_device_id": "system", 00:11:29.564 "dma_device_type": 1 00:11:29.564 }, 00:11:29.564 { 00:11:29.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.564 "dma_device_type": 2 00:11:29.564 } 00:11:29.564 ], 00:11:29.564 "driver_specific": {} 00:11:29.564 }' 00:11:29.564 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:29.564 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:29.564 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:29.564 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:29.564 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:29.564 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:29.564 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:29.564 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:29.564 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:29.564 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:29.564 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:29.564 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:29.564 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:29.823 [2024-07-23 06:24:42.342353] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.823 [2024-07-23 06:24:42.342377] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.823 [2024-07-23 06:24:42.342398] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.823 [2024-07-23 06:24:42.342412] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.823 [2024-07-23 06:24:42.342417] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34f176034a00 name Existed_Raid, state offline 00:11:30.082 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 51996 00:11:30.082 06:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 51996 ']' 00:11:30.082 06:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 51996 00:11:30.082 06:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:11:30.082 06:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:30.082 06:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 51996 00:11:30.082 06:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:11:30.082 06:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:30.082 06:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:11:30.082 killing process with pid 51996 00:11:30.082 06:24:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51996' 00:11:30.082 06:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 51996 00:11:30.082 [2024-07-23 06:24:42.369900] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.082 06:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 51996 00:11:30.082 [2024-07-23 06:24:42.388171] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:30.082 06:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:30.082 00:11:30.082 real 0m24.019s 00:11:30.082 user 0m43.778s 00:11:30.082 sys 0m3.421s 00:11:30.082 06:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.082 06:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.082 ************************************ 00:11:30.082 END TEST raid_state_function_test 00:11:30.082 ************************************ 00:11:30.341 06:24:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:30.341 06:24:42 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:11:30.341 06:24:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:30.341 06:24:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.341 06:24:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:30.341 ************************************ 00:11:30.341 START TEST raid_state_function_test_sb 00:11:30.341 ************************************ 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=52725 00:11:30.341 Process raid pid: 52725 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 52725' 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 52725 /var/tmp/spdk-raid.sock 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 52725 ']' 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:30.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:30.341 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.341 [2024-07-23 06:24:42.640351] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:30.341 [2024-07-23 06:24:42.640570] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:30.908 EAL: TSC is not safe to use in SMP mode 00:11:30.908 EAL: TSC is not invariant 00:11:30.908 [2024-07-23 06:24:43.187148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.908 [2024-07-23 06:24:43.271234] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
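The raid_state_function_test_sb run starting here walks the same state machine with superblocks enabled. A rough reproduction of its setup, using the binary, socket path and create arguments visible in the trace (the polling loop is only a stand-in for the test's waitforlisten helper):

    # Sketch: start bdev_svc on the private RPC socket and create a raid0 bdev with
    # an on-disk superblock (-s); with no base bdevs yet it stays in "configuring".
    app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $app -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done   # assumed stand-in for waitforlisten
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
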
00:11:30.908 [2024-07-23 06:24:43.273386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.908 [2024-07-23 06:24:43.274226] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.908 [2024-07-23 06:24:43.274248] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.167 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.167 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:11:31.167 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:31.735 [2024-07-23 06:24:43.946008] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.735 [2024-07-23 06:24:43.946063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.735 [2024-07-23 06:24:43.946068] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.735 [2024-07-23 06:24:43.946077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.735 [2024-07-23 06:24:43.946081] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:31.735 [2024-07-23 06:24:43.946088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.735 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:31.735 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:31.735 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:31.735 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:31.735 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:31.735 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:31.735 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:31.735 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:31.735 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:31.735 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:31.735 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.735 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.735 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:31.735 "name": "Existed_Raid", 00:11:31.735 "uuid": "402c20ab-48bc-11ef-a06c-59ddad71024c", 00:11:31.735 "strip_size_kb": 64, 00:11:31.735 "state": "configuring", 00:11:31.735 "raid_level": "raid0", 00:11:31.735 "superblock": true, 00:11:31.735 "num_base_bdevs": 3, 00:11:31.735 "num_base_bdevs_discovered": 0, 00:11:31.735 
"num_base_bdevs_operational": 3, 00:11:31.735 "base_bdevs_list": [ 00:11:31.735 { 00:11:31.735 "name": "BaseBdev1", 00:11:31.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.735 "is_configured": false, 00:11:31.735 "data_offset": 0, 00:11:31.735 "data_size": 0 00:11:31.735 }, 00:11:31.735 { 00:11:31.735 "name": "BaseBdev2", 00:11:31.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.735 "is_configured": false, 00:11:31.735 "data_offset": 0, 00:11:31.735 "data_size": 0 00:11:31.735 }, 00:11:31.735 { 00:11:31.735 "name": "BaseBdev3", 00:11:31.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.735 "is_configured": false, 00:11:31.735 "data_offset": 0, 00:11:31.735 "data_size": 0 00:11:31.735 } 00:11:31.735 ] 00:11:31.735 }' 00:11:31.735 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:31.735 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.301 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:32.301 [2024-07-23 06:24:44.770060] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.301 [2024-07-23 06:24:44.770087] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3bbe98c34500 name Existed_Raid, state configuring 00:11:32.301 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:32.559 [2024-07-23 06:24:45.046081] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:32.559 [2024-07-23 06:24:45.046161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:32.559 [2024-07-23 06:24:45.046166] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.560 [2024-07-23 06:24:45.046175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.560 [2024-07-23 06:24:45.046178] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:32.560 [2024-07-23 06:24:45.046186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:32.560 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:32.819 [2024-07-23 06:24:45.311088] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.819 BaseBdev1 00:11:32.819 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:32.819 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:32.819 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:32.819 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:32.819 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:32.819 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:32.819 06:24:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:33.077 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:33.335 [ 00:11:33.335 { 00:11:33.335 "name": "BaseBdev1", 00:11:33.335 "aliases": [ 00:11:33.335 "40fc455f-48bc-11ef-a06c-59ddad71024c" 00:11:33.335 ], 00:11:33.335 "product_name": "Malloc disk", 00:11:33.335 "block_size": 512, 00:11:33.335 "num_blocks": 65536, 00:11:33.335 "uuid": "40fc455f-48bc-11ef-a06c-59ddad71024c", 00:11:33.335 "assigned_rate_limits": { 00:11:33.335 "rw_ios_per_sec": 0, 00:11:33.335 "rw_mbytes_per_sec": 0, 00:11:33.335 "r_mbytes_per_sec": 0, 00:11:33.335 "w_mbytes_per_sec": 0 00:11:33.335 }, 00:11:33.335 "claimed": true, 00:11:33.335 "claim_type": "exclusive_write", 00:11:33.335 "zoned": false, 00:11:33.335 "supported_io_types": { 00:11:33.335 "read": true, 00:11:33.335 "write": true, 00:11:33.335 "unmap": true, 00:11:33.335 "flush": true, 00:11:33.335 "reset": true, 00:11:33.335 "nvme_admin": false, 00:11:33.335 "nvme_io": false, 00:11:33.335 "nvme_io_md": false, 00:11:33.335 "write_zeroes": true, 00:11:33.335 "zcopy": true, 00:11:33.335 "get_zone_info": false, 00:11:33.335 "zone_management": false, 00:11:33.335 "zone_append": false, 00:11:33.335 "compare": false, 00:11:33.335 "compare_and_write": false, 00:11:33.335 "abort": true, 00:11:33.335 "seek_hole": false, 00:11:33.335 "seek_data": false, 00:11:33.335 "copy": true, 00:11:33.335 "nvme_iov_md": false 00:11:33.335 }, 00:11:33.335 "memory_domains": [ 00:11:33.335 { 00:11:33.335 "dma_device_id": "system", 00:11:33.335 "dma_device_type": 1 00:11:33.335 }, 00:11:33.335 { 00:11:33.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.335 "dma_device_type": 2 00:11:33.335 } 00:11:33.335 ], 00:11:33.335 "driver_specific": {} 00:11:33.335 } 00:11:33.335 ] 00:11:33.335 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:33.335 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:33.335 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:33.335 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:33.335 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:33.335 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:33.335 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:33.335 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:33.335 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:33.335 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:33.335 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:33.335 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.335 06:24:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.593 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:33.593 "name": "Existed_Raid", 00:11:33.593 "uuid": "40d3fc57-48bc-11ef-a06c-59ddad71024c", 00:11:33.593 "strip_size_kb": 64, 00:11:33.593 "state": "configuring", 00:11:33.593 "raid_level": "raid0", 00:11:33.593 "superblock": true, 00:11:33.593 "num_base_bdevs": 3, 00:11:33.593 "num_base_bdevs_discovered": 1, 00:11:33.593 "num_base_bdevs_operational": 3, 00:11:33.593 "base_bdevs_list": [ 00:11:33.593 { 00:11:33.593 "name": "BaseBdev1", 00:11:33.593 "uuid": "40fc455f-48bc-11ef-a06c-59ddad71024c", 00:11:33.593 "is_configured": true, 00:11:33.593 "data_offset": 2048, 00:11:33.593 "data_size": 63488 00:11:33.593 }, 00:11:33.593 { 00:11:33.593 "name": "BaseBdev2", 00:11:33.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.593 "is_configured": false, 00:11:33.593 "data_offset": 0, 00:11:33.593 "data_size": 0 00:11:33.593 }, 00:11:33.593 { 00:11:33.593 "name": "BaseBdev3", 00:11:33.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.593 "is_configured": false, 00:11:33.593 "data_offset": 0, 00:11:33.593 "data_size": 0 00:11:33.593 } 00:11:33.593 ] 00:11:33.593 }' 00:11:33.593 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:33.593 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.205 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:34.205 [2024-07-23 06:24:46.634137] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.205 [2024-07-23 06:24:46.634178] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3bbe98c34500 name Existed_Raid, state configuring 00:11:34.205 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:34.463 [2024-07-23 06:24:46.902188] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.463 [2024-07-23 06:24:46.903024] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.463 [2024-07-23 06:24:46.903061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.463 [2024-07-23 06:24:46.903066] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.463 [2024-07-23 06:24:46.903075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.463 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:34.463 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:34.463 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:34.463 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:34.463 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:34.463 06:24:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:34.463 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:34.463 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:34.463 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:34.463 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:34.463 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:34.463 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:34.463 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.463 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.721 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:34.721 "name": "Existed_Raid", 00:11:34.721 "uuid": "41ef3458-48bc-11ef-a06c-59ddad71024c", 00:11:34.721 "strip_size_kb": 64, 00:11:34.721 "state": "configuring", 00:11:34.721 "raid_level": "raid0", 00:11:34.721 "superblock": true, 00:11:34.721 "num_base_bdevs": 3, 00:11:34.721 "num_base_bdevs_discovered": 1, 00:11:34.721 "num_base_bdevs_operational": 3, 00:11:34.721 "base_bdevs_list": [ 00:11:34.721 { 00:11:34.721 "name": "BaseBdev1", 00:11:34.721 "uuid": "40fc455f-48bc-11ef-a06c-59ddad71024c", 00:11:34.721 "is_configured": true, 00:11:34.721 "data_offset": 2048, 00:11:34.721 "data_size": 63488 00:11:34.721 }, 00:11:34.721 { 00:11:34.721 "name": "BaseBdev2", 00:11:34.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.721 "is_configured": false, 00:11:34.721 "data_offset": 0, 00:11:34.721 "data_size": 0 00:11:34.721 }, 00:11:34.721 { 00:11:34.721 "name": "BaseBdev3", 00:11:34.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.721 "is_configured": false, 00:11:34.721 "data_offset": 0, 00:11:34.721 "data_size": 0 00:11:34.721 } 00:11:34.721 ] 00:11:34.721 }' 00:11:34.721 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:34.721 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.980 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.238 [2024-07-23 06:24:47.730402] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.238 BaseBdev2 00:11:35.238 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:35.238 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:35.238 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:35.238 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:35.238 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:35.238 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:35.238 
06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:35.497 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:36.063 [ 00:11:36.063 { 00:11:36.063 "name": "BaseBdev2", 00:11:36.063 "aliases": [ 00:11:36.063 "426d8f50-48bc-11ef-a06c-59ddad71024c" 00:11:36.063 ], 00:11:36.063 "product_name": "Malloc disk", 00:11:36.063 "block_size": 512, 00:11:36.063 "num_blocks": 65536, 00:11:36.063 "uuid": "426d8f50-48bc-11ef-a06c-59ddad71024c", 00:11:36.063 "assigned_rate_limits": { 00:11:36.063 "rw_ios_per_sec": 0, 00:11:36.063 "rw_mbytes_per_sec": 0, 00:11:36.063 "r_mbytes_per_sec": 0, 00:11:36.063 "w_mbytes_per_sec": 0 00:11:36.063 }, 00:11:36.063 "claimed": true, 00:11:36.063 "claim_type": "exclusive_write", 00:11:36.063 "zoned": false, 00:11:36.063 "supported_io_types": { 00:11:36.063 "read": true, 00:11:36.063 "write": true, 00:11:36.063 "unmap": true, 00:11:36.063 "flush": true, 00:11:36.063 "reset": true, 00:11:36.063 "nvme_admin": false, 00:11:36.063 "nvme_io": false, 00:11:36.063 "nvme_io_md": false, 00:11:36.063 "write_zeroes": true, 00:11:36.063 "zcopy": true, 00:11:36.063 "get_zone_info": false, 00:11:36.063 "zone_management": false, 00:11:36.063 "zone_append": false, 00:11:36.063 "compare": false, 00:11:36.063 "compare_and_write": false, 00:11:36.063 "abort": true, 00:11:36.063 "seek_hole": false, 00:11:36.063 "seek_data": false, 00:11:36.063 "copy": true, 00:11:36.063 "nvme_iov_md": false 00:11:36.063 }, 00:11:36.063 "memory_domains": [ 00:11:36.063 { 00:11:36.063 "dma_device_id": "system", 00:11:36.063 "dma_device_type": 1 00:11:36.063 }, 00:11:36.063 { 00:11:36.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.063 "dma_device_type": 2 00:11:36.063 } 00:11:36.063 ], 00:11:36.063 "driver_specific": {} 00:11:36.063 } 00:11:36.063 ] 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:36.063 "name": "Existed_Raid", 00:11:36.063 "uuid": "41ef3458-48bc-11ef-a06c-59ddad71024c", 00:11:36.063 "strip_size_kb": 64, 00:11:36.063 "state": "configuring", 00:11:36.063 "raid_level": "raid0", 00:11:36.063 "superblock": true, 00:11:36.063 "num_base_bdevs": 3, 00:11:36.063 "num_base_bdevs_discovered": 2, 00:11:36.063 "num_base_bdevs_operational": 3, 00:11:36.063 "base_bdevs_list": [ 00:11:36.063 { 00:11:36.063 "name": "BaseBdev1", 00:11:36.063 "uuid": "40fc455f-48bc-11ef-a06c-59ddad71024c", 00:11:36.063 "is_configured": true, 00:11:36.063 "data_offset": 2048, 00:11:36.063 "data_size": 63488 00:11:36.063 }, 00:11:36.063 { 00:11:36.063 "name": "BaseBdev2", 00:11:36.063 "uuid": "426d8f50-48bc-11ef-a06c-59ddad71024c", 00:11:36.063 "is_configured": true, 00:11:36.063 "data_offset": 2048, 00:11:36.063 "data_size": 63488 00:11:36.063 }, 00:11:36.063 { 00:11:36.063 "name": "BaseBdev3", 00:11:36.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.063 "is_configured": false, 00:11:36.063 "data_offset": 0, 00:11:36.063 "data_size": 0 00:11:36.063 } 00:11:36.063 ] 00:11:36.063 }' 00:11:36.063 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:36.064 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.630 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:36.888 [2024-07-23 06:24:49.174488] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.888 [2024-07-23 06:24:49.174583] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3bbe98c34a00 00:11:36.888 [2024-07-23 06:24:49.174589] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:36.888 [2024-07-23 06:24:49.174610] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3bbe98c97e20 00:11:36.888 [2024-07-23 06:24:49.174660] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3bbe98c34a00 00:11:36.889 [2024-07-23 06:24:49.174665] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3bbe98c34a00 00:11:36.889 [2024-07-23 06:24:49.174685] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.889 BaseBdev3 00:11:36.889 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:36.889 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:36.889 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:36.889 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:36.889 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:36.889 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:11:36.889 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:37.147 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.405 [ 00:11:37.405 { 00:11:37.405 "name": "BaseBdev3", 00:11:37.405 "aliases": [ 00:11:37.405 "4349e9c4-48bc-11ef-a06c-59ddad71024c" 00:11:37.405 ], 00:11:37.405 "product_name": "Malloc disk", 00:11:37.405 "block_size": 512, 00:11:37.405 "num_blocks": 65536, 00:11:37.405 "uuid": "4349e9c4-48bc-11ef-a06c-59ddad71024c", 00:11:37.405 "assigned_rate_limits": { 00:11:37.405 "rw_ios_per_sec": 0, 00:11:37.405 "rw_mbytes_per_sec": 0, 00:11:37.405 "r_mbytes_per_sec": 0, 00:11:37.405 "w_mbytes_per_sec": 0 00:11:37.405 }, 00:11:37.405 "claimed": true, 00:11:37.405 "claim_type": "exclusive_write", 00:11:37.405 "zoned": false, 00:11:37.405 "supported_io_types": { 00:11:37.405 "read": true, 00:11:37.405 "write": true, 00:11:37.405 "unmap": true, 00:11:37.405 "flush": true, 00:11:37.405 "reset": true, 00:11:37.405 "nvme_admin": false, 00:11:37.405 "nvme_io": false, 00:11:37.405 "nvme_io_md": false, 00:11:37.405 "write_zeroes": true, 00:11:37.405 "zcopy": true, 00:11:37.405 "get_zone_info": false, 00:11:37.405 "zone_management": false, 00:11:37.405 "zone_append": false, 00:11:37.405 "compare": false, 00:11:37.405 "compare_and_write": false, 00:11:37.405 "abort": true, 00:11:37.405 "seek_hole": false, 00:11:37.405 "seek_data": false, 00:11:37.405 "copy": true, 00:11:37.405 "nvme_iov_md": false 00:11:37.405 }, 00:11:37.405 "memory_domains": [ 00:11:37.405 { 00:11:37.405 "dma_device_id": "system", 00:11:37.405 "dma_device_type": 1 00:11:37.405 }, 00:11:37.405 { 00:11:37.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.405 "dma_device_type": 2 00:11:37.405 } 00:11:37.405 ], 00:11:37.405 "driver_specific": {} 00:11:37.405 } 00:11:37.405 ] 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.405 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.663 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:37.663 "name": "Existed_Raid", 00:11:37.663 "uuid": "41ef3458-48bc-11ef-a06c-59ddad71024c", 00:11:37.663 "strip_size_kb": 64, 00:11:37.663 "state": "online", 00:11:37.663 "raid_level": "raid0", 00:11:37.663 "superblock": true, 00:11:37.663 "num_base_bdevs": 3, 00:11:37.663 "num_base_bdevs_discovered": 3, 00:11:37.663 "num_base_bdevs_operational": 3, 00:11:37.663 "base_bdevs_list": [ 00:11:37.663 { 00:11:37.663 "name": "BaseBdev1", 00:11:37.663 "uuid": "40fc455f-48bc-11ef-a06c-59ddad71024c", 00:11:37.663 "is_configured": true, 00:11:37.663 "data_offset": 2048, 00:11:37.663 "data_size": 63488 00:11:37.663 }, 00:11:37.663 { 00:11:37.663 "name": "BaseBdev2", 00:11:37.663 "uuid": "426d8f50-48bc-11ef-a06c-59ddad71024c", 00:11:37.663 "is_configured": true, 00:11:37.663 "data_offset": 2048, 00:11:37.663 "data_size": 63488 00:11:37.663 }, 00:11:37.663 { 00:11:37.663 "name": "BaseBdev3", 00:11:37.663 "uuid": "4349e9c4-48bc-11ef-a06c-59ddad71024c", 00:11:37.663 "is_configured": true, 00:11:37.663 "data_offset": 2048, 00:11:37.663 "data_size": 63488 00:11:37.663 } 00:11:37.663 ] 00:11:37.663 }' 00:11:37.663 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:37.663 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.921 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:37.921 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:37.921 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:37.921 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:37.922 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:37.922 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:37.922 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:37.922 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:38.180 [2024-07-23 06:24:50.554497] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.180 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:38.180 "name": "Existed_Raid", 00:11:38.180 "aliases": [ 00:11:38.180 "41ef3458-48bc-11ef-a06c-59ddad71024c" 00:11:38.180 ], 00:11:38.180 "product_name": "Raid Volume", 00:11:38.180 "block_size": 512, 00:11:38.180 "num_blocks": 190464, 00:11:38.180 "uuid": "41ef3458-48bc-11ef-a06c-59ddad71024c", 00:11:38.180 "assigned_rate_limits": { 00:11:38.180 "rw_ios_per_sec": 0, 00:11:38.180 "rw_mbytes_per_sec": 0, 00:11:38.180 "r_mbytes_per_sec": 0, 00:11:38.180 "w_mbytes_per_sec": 0 00:11:38.180 }, 00:11:38.180 "claimed": false, 00:11:38.180 "zoned": false, 
00:11:38.180 "supported_io_types": { 00:11:38.180 "read": true, 00:11:38.180 "write": true, 00:11:38.180 "unmap": true, 00:11:38.180 "flush": true, 00:11:38.180 "reset": true, 00:11:38.180 "nvme_admin": false, 00:11:38.180 "nvme_io": false, 00:11:38.180 "nvme_io_md": false, 00:11:38.180 "write_zeroes": true, 00:11:38.180 "zcopy": false, 00:11:38.180 "get_zone_info": false, 00:11:38.180 "zone_management": false, 00:11:38.180 "zone_append": false, 00:11:38.180 "compare": false, 00:11:38.180 "compare_and_write": false, 00:11:38.180 "abort": false, 00:11:38.180 "seek_hole": false, 00:11:38.180 "seek_data": false, 00:11:38.180 "copy": false, 00:11:38.180 "nvme_iov_md": false 00:11:38.180 }, 00:11:38.180 "memory_domains": [ 00:11:38.180 { 00:11:38.180 "dma_device_id": "system", 00:11:38.180 "dma_device_type": 1 00:11:38.180 }, 00:11:38.180 { 00:11:38.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.180 "dma_device_type": 2 00:11:38.180 }, 00:11:38.180 { 00:11:38.180 "dma_device_id": "system", 00:11:38.180 "dma_device_type": 1 00:11:38.180 }, 00:11:38.180 { 00:11:38.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.180 "dma_device_type": 2 00:11:38.180 }, 00:11:38.180 { 00:11:38.180 "dma_device_id": "system", 00:11:38.180 "dma_device_type": 1 00:11:38.180 }, 00:11:38.180 { 00:11:38.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.180 "dma_device_type": 2 00:11:38.180 } 00:11:38.180 ], 00:11:38.180 "driver_specific": { 00:11:38.180 "raid": { 00:11:38.180 "uuid": "41ef3458-48bc-11ef-a06c-59ddad71024c", 00:11:38.180 "strip_size_kb": 64, 00:11:38.180 "state": "online", 00:11:38.180 "raid_level": "raid0", 00:11:38.180 "superblock": true, 00:11:38.180 "num_base_bdevs": 3, 00:11:38.180 "num_base_bdevs_discovered": 3, 00:11:38.180 "num_base_bdevs_operational": 3, 00:11:38.180 "base_bdevs_list": [ 00:11:38.180 { 00:11:38.180 "name": "BaseBdev1", 00:11:38.180 "uuid": "40fc455f-48bc-11ef-a06c-59ddad71024c", 00:11:38.180 "is_configured": true, 00:11:38.180 "data_offset": 2048, 00:11:38.180 "data_size": 63488 00:11:38.180 }, 00:11:38.180 { 00:11:38.180 "name": "BaseBdev2", 00:11:38.180 "uuid": "426d8f50-48bc-11ef-a06c-59ddad71024c", 00:11:38.180 "is_configured": true, 00:11:38.180 "data_offset": 2048, 00:11:38.180 "data_size": 63488 00:11:38.180 }, 00:11:38.180 { 00:11:38.180 "name": "BaseBdev3", 00:11:38.180 "uuid": "4349e9c4-48bc-11ef-a06c-59ddad71024c", 00:11:38.180 "is_configured": true, 00:11:38.180 "data_offset": 2048, 00:11:38.180 "data_size": 63488 00:11:38.180 } 00:11:38.180 ] 00:11:38.180 } 00:11:38.180 } 00:11:38.180 }' 00:11:38.180 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.180 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:38.180 BaseBdev2 00:11:38.180 BaseBdev3' 00:11:38.180 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:38.180 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:38.180 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:38.439 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:38.439 "name": "BaseBdev1", 00:11:38.439 "aliases": [ 00:11:38.439 "40fc455f-48bc-11ef-a06c-59ddad71024c" 00:11:38.439 
], 00:11:38.439 "product_name": "Malloc disk", 00:11:38.439 "block_size": 512, 00:11:38.439 "num_blocks": 65536, 00:11:38.439 "uuid": "40fc455f-48bc-11ef-a06c-59ddad71024c", 00:11:38.439 "assigned_rate_limits": { 00:11:38.439 "rw_ios_per_sec": 0, 00:11:38.439 "rw_mbytes_per_sec": 0, 00:11:38.439 "r_mbytes_per_sec": 0, 00:11:38.439 "w_mbytes_per_sec": 0 00:11:38.439 }, 00:11:38.439 "claimed": true, 00:11:38.439 "claim_type": "exclusive_write", 00:11:38.439 "zoned": false, 00:11:38.439 "supported_io_types": { 00:11:38.439 "read": true, 00:11:38.439 "write": true, 00:11:38.439 "unmap": true, 00:11:38.439 "flush": true, 00:11:38.439 "reset": true, 00:11:38.439 "nvme_admin": false, 00:11:38.439 "nvme_io": false, 00:11:38.439 "nvme_io_md": false, 00:11:38.439 "write_zeroes": true, 00:11:38.439 "zcopy": true, 00:11:38.439 "get_zone_info": false, 00:11:38.439 "zone_management": false, 00:11:38.439 "zone_append": false, 00:11:38.439 "compare": false, 00:11:38.439 "compare_and_write": false, 00:11:38.439 "abort": true, 00:11:38.439 "seek_hole": false, 00:11:38.440 "seek_data": false, 00:11:38.440 "copy": true, 00:11:38.440 "nvme_iov_md": false 00:11:38.440 }, 00:11:38.440 "memory_domains": [ 00:11:38.440 { 00:11:38.440 "dma_device_id": "system", 00:11:38.440 "dma_device_type": 1 00:11:38.440 }, 00:11:38.440 { 00:11:38.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.440 "dma_device_type": 2 00:11:38.440 } 00:11:38.440 ], 00:11:38.440 "driver_specific": {} 00:11:38.440 }' 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:38.440 06:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:38.698 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:38.698 "name": "BaseBdev2", 00:11:38.698 "aliases": [ 00:11:38.698 "426d8f50-48bc-11ef-a06c-59ddad71024c" 00:11:38.698 ], 00:11:38.698 "product_name": "Malloc disk", 00:11:38.698 "block_size": 512, 00:11:38.698 "num_blocks": 65536, 00:11:38.698 "uuid": 
"426d8f50-48bc-11ef-a06c-59ddad71024c", 00:11:38.698 "assigned_rate_limits": { 00:11:38.698 "rw_ios_per_sec": 0, 00:11:38.698 "rw_mbytes_per_sec": 0, 00:11:38.698 "r_mbytes_per_sec": 0, 00:11:38.698 "w_mbytes_per_sec": 0 00:11:38.698 }, 00:11:38.698 "claimed": true, 00:11:38.698 "claim_type": "exclusive_write", 00:11:38.698 "zoned": false, 00:11:38.698 "supported_io_types": { 00:11:38.698 "read": true, 00:11:38.698 "write": true, 00:11:38.698 "unmap": true, 00:11:38.698 "flush": true, 00:11:38.698 "reset": true, 00:11:38.698 "nvme_admin": false, 00:11:38.698 "nvme_io": false, 00:11:38.698 "nvme_io_md": false, 00:11:38.698 "write_zeroes": true, 00:11:38.698 "zcopy": true, 00:11:38.698 "get_zone_info": false, 00:11:38.698 "zone_management": false, 00:11:38.698 "zone_append": false, 00:11:38.698 "compare": false, 00:11:38.698 "compare_and_write": false, 00:11:38.698 "abort": true, 00:11:38.698 "seek_hole": false, 00:11:38.698 "seek_data": false, 00:11:38.698 "copy": true, 00:11:38.698 "nvme_iov_md": false 00:11:38.698 }, 00:11:38.698 "memory_domains": [ 00:11:38.698 { 00:11:38.698 "dma_device_id": "system", 00:11:38.698 "dma_device_type": 1 00:11:38.698 }, 00:11:38.698 { 00:11:38.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.698 "dma_device_type": 2 00:11:38.698 } 00:11:38.698 ], 00:11:38.698 "driver_specific": {} 00:11:38.698 }' 00:11:38.698 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:38.698 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:38.698 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:38.698 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:38.698 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:38.698 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:38.698 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:38.698 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:38.956 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:38.956 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:38.956 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:38.956 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:38.956 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:38.956 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:38.956 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:39.213 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:39.214 "name": "BaseBdev3", 00:11:39.214 "aliases": [ 00:11:39.214 "4349e9c4-48bc-11ef-a06c-59ddad71024c" 00:11:39.214 ], 00:11:39.214 "product_name": "Malloc disk", 00:11:39.214 "block_size": 512, 00:11:39.214 "num_blocks": 65536, 00:11:39.214 "uuid": "4349e9c4-48bc-11ef-a06c-59ddad71024c", 00:11:39.214 "assigned_rate_limits": { 00:11:39.214 "rw_ios_per_sec": 0, 00:11:39.214 "rw_mbytes_per_sec": 0, 
00:11:39.214 "r_mbytes_per_sec": 0, 00:11:39.214 "w_mbytes_per_sec": 0 00:11:39.214 }, 00:11:39.214 "claimed": true, 00:11:39.214 "claim_type": "exclusive_write", 00:11:39.214 "zoned": false, 00:11:39.214 "supported_io_types": { 00:11:39.214 "read": true, 00:11:39.214 "write": true, 00:11:39.214 "unmap": true, 00:11:39.214 "flush": true, 00:11:39.214 "reset": true, 00:11:39.214 "nvme_admin": false, 00:11:39.214 "nvme_io": false, 00:11:39.214 "nvme_io_md": false, 00:11:39.214 "write_zeroes": true, 00:11:39.214 "zcopy": true, 00:11:39.214 "get_zone_info": false, 00:11:39.214 "zone_management": false, 00:11:39.214 "zone_append": false, 00:11:39.214 "compare": false, 00:11:39.214 "compare_and_write": false, 00:11:39.214 "abort": true, 00:11:39.214 "seek_hole": false, 00:11:39.214 "seek_data": false, 00:11:39.214 "copy": true, 00:11:39.214 "nvme_iov_md": false 00:11:39.214 }, 00:11:39.214 "memory_domains": [ 00:11:39.214 { 00:11:39.214 "dma_device_id": "system", 00:11:39.214 "dma_device_type": 1 00:11:39.214 }, 00:11:39.214 { 00:11:39.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.214 "dma_device_type": 2 00:11:39.214 } 00:11:39.214 ], 00:11:39.214 "driver_specific": {} 00:11:39.214 }' 00:11:39.214 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:39.214 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:39.214 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:39.214 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:39.214 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:39.214 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:39.214 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:39.214 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:39.214 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:39.214 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:39.214 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:39.214 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:39.214 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:39.472 [2024-07-23 06:24:51.918541] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.472 [2024-07-23 06:24:51.918563] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.472 [2024-07-23 06:24:51.918578] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # 
expected_state=offline 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.472 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.730 06:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:39.730 "name": "Existed_Raid", 00:11:39.730 "uuid": "41ef3458-48bc-11ef-a06c-59ddad71024c", 00:11:39.730 "strip_size_kb": 64, 00:11:39.730 "state": "offline", 00:11:39.730 "raid_level": "raid0", 00:11:39.730 "superblock": true, 00:11:39.730 "num_base_bdevs": 3, 00:11:39.730 "num_base_bdevs_discovered": 2, 00:11:39.730 "num_base_bdevs_operational": 2, 00:11:39.730 "base_bdevs_list": [ 00:11:39.730 { 00:11:39.730 "name": null, 00:11:39.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.730 "is_configured": false, 00:11:39.730 "data_offset": 2048, 00:11:39.730 "data_size": 63488 00:11:39.730 }, 00:11:39.730 { 00:11:39.730 "name": "BaseBdev2", 00:11:39.730 "uuid": "426d8f50-48bc-11ef-a06c-59ddad71024c", 00:11:39.730 "is_configured": true, 00:11:39.730 "data_offset": 2048, 00:11:39.730 "data_size": 63488 00:11:39.730 }, 00:11:39.730 { 00:11:39.730 "name": "BaseBdev3", 00:11:39.730 "uuid": "4349e9c4-48bc-11ef-a06c-59ddad71024c", 00:11:39.730 "is_configured": true, 00:11:39.730 "data_offset": 2048, 00:11:39.730 "data_size": 63488 00:11:39.730 } 00:11:39.730 ] 00:11:39.730 }' 00:11:39.730 06:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.730 06:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.297 06:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:40.297 06:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:40.297 06:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.297 06:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:40.555 06:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:11:40.555 06:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:40.555 06:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:40.555 [2024-07-23 06:24:53.072374] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:40.814 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:40.814 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:40.814 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:40.814 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.113 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:41.113 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:41.113 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:41.373 [2024-07-23 06:24:53.634794] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:41.373 [2024-07-23 06:24:53.634827] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3bbe98c34a00 name Existed_Raid, state offline 00:11:41.373 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:41.373 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:41.373 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.373 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:41.373 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:41.373 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:41.373 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:41.373 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:41.373 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:41.373 06:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:41.631 BaseBdev2 00:11:41.631 06:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:41.631 06:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:41.631 06:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:41.631 06:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:41.631 06:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:41.631 06:24:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:41.631 06:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:41.890 06:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:42.149 [ 00:11:42.149 { 00:11:42.149 "name": "BaseBdev2", 00:11:42.149 "aliases": [ 00:11:42.149 "463b7bb6-48bc-11ef-a06c-59ddad71024c" 00:11:42.149 ], 00:11:42.149 "product_name": "Malloc disk", 00:11:42.149 "block_size": 512, 00:11:42.149 "num_blocks": 65536, 00:11:42.149 "uuid": "463b7bb6-48bc-11ef-a06c-59ddad71024c", 00:11:42.149 "assigned_rate_limits": { 00:11:42.149 "rw_ios_per_sec": 0, 00:11:42.149 "rw_mbytes_per_sec": 0, 00:11:42.149 "r_mbytes_per_sec": 0, 00:11:42.149 "w_mbytes_per_sec": 0 00:11:42.149 }, 00:11:42.149 "claimed": false, 00:11:42.149 "zoned": false, 00:11:42.149 "supported_io_types": { 00:11:42.149 "read": true, 00:11:42.149 "write": true, 00:11:42.149 "unmap": true, 00:11:42.149 "flush": true, 00:11:42.149 "reset": true, 00:11:42.149 "nvme_admin": false, 00:11:42.149 "nvme_io": false, 00:11:42.149 "nvme_io_md": false, 00:11:42.149 "write_zeroes": true, 00:11:42.149 "zcopy": true, 00:11:42.149 "get_zone_info": false, 00:11:42.149 "zone_management": false, 00:11:42.149 "zone_append": false, 00:11:42.149 "compare": false, 00:11:42.149 "compare_and_write": false, 00:11:42.149 "abort": true, 00:11:42.149 "seek_hole": false, 00:11:42.149 "seek_data": false, 00:11:42.149 "copy": true, 00:11:42.149 "nvme_iov_md": false 00:11:42.149 }, 00:11:42.149 "memory_domains": [ 00:11:42.149 { 00:11:42.149 "dma_device_id": "system", 00:11:42.149 "dma_device_type": 1 00:11:42.149 }, 00:11:42.149 { 00:11:42.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.149 "dma_device_type": 2 00:11:42.149 } 00:11:42.149 ], 00:11:42.149 "driver_specific": {} 00:11:42.149 } 00:11:42.149 ] 00:11:42.149 06:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:42.149 06:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:42.149 06:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:42.149 06:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:42.407 BaseBdev3 00:11:42.407 06:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:42.407 06:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:42.407 06:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:42.407 06:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:42.407 06:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:42.407 06:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:42.407 06:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:42.666 06:24:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:42.925 [ 00:11:42.925 { 00:11:42.925 "name": "BaseBdev3", 00:11:42.925 "aliases": [ 00:11:42.925 "46a6e7ec-48bc-11ef-a06c-59ddad71024c" 00:11:42.925 ], 00:11:42.925 "product_name": "Malloc disk", 00:11:42.925 "block_size": 512, 00:11:42.925 "num_blocks": 65536, 00:11:42.925 "uuid": "46a6e7ec-48bc-11ef-a06c-59ddad71024c", 00:11:42.925 "assigned_rate_limits": { 00:11:42.925 "rw_ios_per_sec": 0, 00:11:42.925 "rw_mbytes_per_sec": 0, 00:11:42.925 "r_mbytes_per_sec": 0, 00:11:42.925 "w_mbytes_per_sec": 0 00:11:42.925 }, 00:11:42.925 "claimed": false, 00:11:42.925 "zoned": false, 00:11:42.925 "supported_io_types": { 00:11:42.925 "read": true, 00:11:42.925 "write": true, 00:11:42.925 "unmap": true, 00:11:42.925 "flush": true, 00:11:42.925 "reset": true, 00:11:42.925 "nvme_admin": false, 00:11:42.925 "nvme_io": false, 00:11:42.925 "nvme_io_md": false, 00:11:42.925 "write_zeroes": true, 00:11:42.925 "zcopy": true, 00:11:42.925 "get_zone_info": false, 00:11:42.925 "zone_management": false, 00:11:42.925 "zone_append": false, 00:11:42.925 "compare": false, 00:11:42.925 "compare_and_write": false, 00:11:42.925 "abort": true, 00:11:42.925 "seek_hole": false, 00:11:42.925 "seek_data": false, 00:11:42.925 "copy": true, 00:11:42.925 "nvme_iov_md": false 00:11:42.925 }, 00:11:42.925 "memory_domains": [ 00:11:42.925 { 00:11:42.925 "dma_device_id": "system", 00:11:42.925 "dma_device_type": 1 00:11:42.925 }, 00:11:42.925 { 00:11:42.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.925 "dma_device_type": 2 00:11:42.925 } 00:11:42.925 ], 00:11:42.925 "driver_specific": {} 00:11:42.925 } 00:11:42.925 ] 00:11:42.925 06:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:42.925 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:42.925 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:42.925 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:43.184 [2024-07-23 06:24:55.628982] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:43.184 [2024-07-23 06:24:55.629034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:43.184 [2024-07-23 06:24:55.629044] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.184 [2024-07-23 06:24:55.629607] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:43.184 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:43.184 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:43.184 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:43.184 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:43.184 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:43.184 06:24:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:43.184 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:43.184 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:43.184 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:43.184 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:43.184 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.184 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.443 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:43.443 "name": "Existed_Raid", 00:11:43.443 "uuid": "4722cee9-48bc-11ef-a06c-59ddad71024c", 00:11:43.443 "strip_size_kb": 64, 00:11:43.443 "state": "configuring", 00:11:43.443 "raid_level": "raid0", 00:11:43.443 "superblock": true, 00:11:43.443 "num_base_bdevs": 3, 00:11:43.443 "num_base_bdevs_discovered": 2, 00:11:43.443 "num_base_bdevs_operational": 3, 00:11:43.443 "base_bdevs_list": [ 00:11:43.443 { 00:11:43.443 "name": "BaseBdev1", 00:11:43.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.443 "is_configured": false, 00:11:43.443 "data_offset": 0, 00:11:43.443 "data_size": 0 00:11:43.443 }, 00:11:43.443 { 00:11:43.443 "name": "BaseBdev2", 00:11:43.443 "uuid": "463b7bb6-48bc-11ef-a06c-59ddad71024c", 00:11:43.443 "is_configured": true, 00:11:43.443 "data_offset": 2048, 00:11:43.443 "data_size": 63488 00:11:43.443 }, 00:11:43.443 { 00:11:43.443 "name": "BaseBdev3", 00:11:43.443 "uuid": "46a6e7ec-48bc-11ef-a06c-59ddad71024c", 00:11:43.443 "is_configured": true, 00:11:43.443 "data_offset": 2048, 00:11:43.443 "data_size": 63488 00:11:43.443 } 00:11:43.443 ] 00:11:43.443 }' 00:11:43.443 06:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:43.443 06:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.702 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:43.961 [2024-07-23 06:24:56.477027] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:44.220 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:44.220 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:44.220 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:44.220 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:44.220 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:44.220 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:44.220 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:44.220 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:11:44.220 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:44.220 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:44.220 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.220 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.479 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:44.479 "name": "Existed_Raid", 00:11:44.479 "uuid": "4722cee9-48bc-11ef-a06c-59ddad71024c", 00:11:44.479 "strip_size_kb": 64, 00:11:44.479 "state": "configuring", 00:11:44.479 "raid_level": "raid0", 00:11:44.479 "superblock": true, 00:11:44.479 "num_base_bdevs": 3, 00:11:44.479 "num_base_bdevs_discovered": 1, 00:11:44.479 "num_base_bdevs_operational": 3, 00:11:44.479 "base_bdevs_list": [ 00:11:44.479 { 00:11:44.479 "name": "BaseBdev1", 00:11:44.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.479 "is_configured": false, 00:11:44.479 "data_offset": 0, 00:11:44.479 "data_size": 0 00:11:44.479 }, 00:11:44.479 { 00:11:44.479 "name": null, 00:11:44.479 "uuid": "463b7bb6-48bc-11ef-a06c-59ddad71024c", 00:11:44.479 "is_configured": false, 00:11:44.479 "data_offset": 2048, 00:11:44.479 "data_size": 63488 00:11:44.479 }, 00:11:44.479 { 00:11:44.480 "name": "BaseBdev3", 00:11:44.480 "uuid": "46a6e7ec-48bc-11ef-a06c-59ddad71024c", 00:11:44.480 "is_configured": true, 00:11:44.480 "data_offset": 2048, 00:11:44.480 "data_size": 63488 00:11:44.480 } 00:11:44.480 ] 00:11:44.480 }' 00:11:44.480 06:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:44.480 06:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.738 06:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.738 06:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:44.996 06:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:44.996 06:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.255 [2024-07-23 06:24:57.521209] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.255 BaseBdev1 00:11:45.255 06:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:45.255 06:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:45.255 06:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:45.255 06:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:45.255 06:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:45.255 06:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:45.255 06:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:45.514 06:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:45.776 [ 00:11:45.776 { 00:11:45.776 "name": "BaseBdev1", 00:11:45.776 "aliases": [ 00:11:45.776 "484385d0-48bc-11ef-a06c-59ddad71024c" 00:11:45.776 ], 00:11:45.776 "product_name": "Malloc disk", 00:11:45.776 "block_size": 512, 00:11:45.776 "num_blocks": 65536, 00:11:45.776 "uuid": "484385d0-48bc-11ef-a06c-59ddad71024c", 00:11:45.776 "assigned_rate_limits": { 00:11:45.776 "rw_ios_per_sec": 0, 00:11:45.776 "rw_mbytes_per_sec": 0, 00:11:45.776 "r_mbytes_per_sec": 0, 00:11:45.776 "w_mbytes_per_sec": 0 00:11:45.776 }, 00:11:45.776 "claimed": true, 00:11:45.776 "claim_type": "exclusive_write", 00:11:45.776 "zoned": false, 00:11:45.776 "supported_io_types": { 00:11:45.776 "read": true, 00:11:45.776 "write": true, 00:11:45.776 "unmap": true, 00:11:45.776 "flush": true, 00:11:45.776 "reset": true, 00:11:45.776 "nvme_admin": false, 00:11:45.776 "nvme_io": false, 00:11:45.776 "nvme_io_md": false, 00:11:45.776 "write_zeroes": true, 00:11:45.776 "zcopy": true, 00:11:45.776 "get_zone_info": false, 00:11:45.776 "zone_management": false, 00:11:45.776 "zone_append": false, 00:11:45.776 "compare": false, 00:11:45.776 "compare_and_write": false, 00:11:45.776 "abort": true, 00:11:45.776 "seek_hole": false, 00:11:45.776 "seek_data": false, 00:11:45.776 "copy": true, 00:11:45.776 "nvme_iov_md": false 00:11:45.776 }, 00:11:45.776 "memory_domains": [ 00:11:45.777 { 00:11:45.777 "dma_device_id": "system", 00:11:45.777 "dma_device_type": 1 00:11:45.777 }, 00:11:45.777 { 00:11:45.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.777 "dma_device_type": 2 00:11:45.777 } 00:11:45.777 ], 00:11:45.777 "driver_specific": {} 00:11:45.777 } 00:11:45.777 ] 00:11:45.777 06:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:45.777 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:45.777 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:45.777 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:45.777 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:45.777 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:45.777 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:45.777 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:45.777 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:45.777 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:45.777 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:45.777 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.777 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:11:46.035 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:46.035 "name": "Existed_Raid", 00:11:46.035 "uuid": "4722cee9-48bc-11ef-a06c-59ddad71024c", 00:11:46.035 "strip_size_kb": 64, 00:11:46.035 "state": "configuring", 00:11:46.035 "raid_level": "raid0", 00:11:46.035 "superblock": true, 00:11:46.035 "num_base_bdevs": 3, 00:11:46.035 "num_base_bdevs_discovered": 2, 00:11:46.035 "num_base_bdevs_operational": 3, 00:11:46.035 "base_bdevs_list": [ 00:11:46.035 { 00:11:46.036 "name": "BaseBdev1", 00:11:46.036 "uuid": "484385d0-48bc-11ef-a06c-59ddad71024c", 00:11:46.036 "is_configured": true, 00:11:46.036 "data_offset": 2048, 00:11:46.036 "data_size": 63488 00:11:46.036 }, 00:11:46.036 { 00:11:46.036 "name": null, 00:11:46.036 "uuid": "463b7bb6-48bc-11ef-a06c-59ddad71024c", 00:11:46.036 "is_configured": false, 00:11:46.036 "data_offset": 2048, 00:11:46.036 "data_size": 63488 00:11:46.036 }, 00:11:46.036 { 00:11:46.036 "name": "BaseBdev3", 00:11:46.036 "uuid": "46a6e7ec-48bc-11ef-a06c-59ddad71024c", 00:11:46.036 "is_configured": true, 00:11:46.036 "data_offset": 2048, 00:11:46.036 "data_size": 63488 00:11:46.036 } 00:11:46.036 ] 00:11:46.036 }' 00:11:46.036 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:46.036 06:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.294 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.294 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:46.553 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:46.553 06:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:46.811 [2024-07-23 06:24:59.149130] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:46.811 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:46.811 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:46.811 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:46.811 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:46.811 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:46.811 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:46.811 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:46.812 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:46.812 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:46.812 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:46.812 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:11:46.812 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.070 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:47.070 "name": "Existed_Raid", 00:11:47.070 "uuid": "4722cee9-48bc-11ef-a06c-59ddad71024c", 00:11:47.070 "strip_size_kb": 64, 00:11:47.070 "state": "configuring", 00:11:47.070 "raid_level": "raid0", 00:11:47.070 "superblock": true, 00:11:47.070 "num_base_bdevs": 3, 00:11:47.070 "num_base_bdevs_discovered": 1, 00:11:47.070 "num_base_bdevs_operational": 3, 00:11:47.070 "base_bdevs_list": [ 00:11:47.070 { 00:11:47.070 "name": "BaseBdev1", 00:11:47.070 "uuid": "484385d0-48bc-11ef-a06c-59ddad71024c", 00:11:47.070 "is_configured": true, 00:11:47.070 "data_offset": 2048, 00:11:47.070 "data_size": 63488 00:11:47.070 }, 00:11:47.070 { 00:11:47.070 "name": null, 00:11:47.070 "uuid": "463b7bb6-48bc-11ef-a06c-59ddad71024c", 00:11:47.070 "is_configured": false, 00:11:47.070 "data_offset": 2048, 00:11:47.070 "data_size": 63488 00:11:47.070 }, 00:11:47.070 { 00:11:47.070 "name": null, 00:11:47.070 "uuid": "46a6e7ec-48bc-11ef-a06c-59ddad71024c", 00:11:47.070 "is_configured": false, 00:11:47.070 "data_offset": 2048, 00:11:47.070 "data_size": 63488 00:11:47.070 } 00:11:47.070 ] 00:11:47.070 }' 00:11:47.070 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:47.070 06:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.637 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:47.637 06:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.637 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:47.637 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:47.895 [2024-07-23 06:25:00.309186] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.895 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:47.895 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:47.895 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:47.895 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:47.895 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:47.895 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:47.895 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:47.895 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:47.895 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:47.895 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:47.895 06:25:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.895 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.153 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:48.153 "name": "Existed_Raid", 00:11:48.153 "uuid": "4722cee9-48bc-11ef-a06c-59ddad71024c", 00:11:48.153 "strip_size_kb": 64, 00:11:48.153 "state": "configuring", 00:11:48.153 "raid_level": "raid0", 00:11:48.153 "superblock": true, 00:11:48.153 "num_base_bdevs": 3, 00:11:48.153 "num_base_bdevs_discovered": 2, 00:11:48.153 "num_base_bdevs_operational": 3, 00:11:48.153 "base_bdevs_list": [ 00:11:48.153 { 00:11:48.153 "name": "BaseBdev1", 00:11:48.153 "uuid": "484385d0-48bc-11ef-a06c-59ddad71024c", 00:11:48.153 "is_configured": true, 00:11:48.153 "data_offset": 2048, 00:11:48.153 "data_size": 63488 00:11:48.153 }, 00:11:48.153 { 00:11:48.153 "name": null, 00:11:48.153 "uuid": "463b7bb6-48bc-11ef-a06c-59ddad71024c", 00:11:48.153 "is_configured": false, 00:11:48.153 "data_offset": 2048, 00:11:48.153 "data_size": 63488 00:11:48.153 }, 00:11:48.153 { 00:11:48.153 "name": "BaseBdev3", 00:11:48.153 "uuid": "46a6e7ec-48bc-11ef-a06c-59ddad71024c", 00:11:48.153 "is_configured": true, 00:11:48.153 "data_offset": 2048, 00:11:48.153 "data_size": 63488 00:11:48.153 } 00:11:48.153 ] 00:11:48.153 }' 00:11:48.153 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:48.153 06:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.412 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.412 06:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:48.979 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:48.979 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:48.979 [2024-07-23 06:25:01.441243] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.979 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:48.979 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:48.979 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:48.979 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:48.979 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:48.979 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:48.979 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:48.979 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:48.979 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:48.979 
06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:48.979 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.979 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.237 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:49.237 "name": "Existed_Raid", 00:11:49.237 "uuid": "4722cee9-48bc-11ef-a06c-59ddad71024c", 00:11:49.237 "strip_size_kb": 64, 00:11:49.237 "state": "configuring", 00:11:49.237 "raid_level": "raid0", 00:11:49.237 "superblock": true, 00:11:49.237 "num_base_bdevs": 3, 00:11:49.237 "num_base_bdevs_discovered": 1, 00:11:49.237 "num_base_bdevs_operational": 3, 00:11:49.237 "base_bdevs_list": [ 00:11:49.237 { 00:11:49.237 "name": null, 00:11:49.237 "uuid": "484385d0-48bc-11ef-a06c-59ddad71024c", 00:11:49.237 "is_configured": false, 00:11:49.237 "data_offset": 2048, 00:11:49.237 "data_size": 63488 00:11:49.237 }, 00:11:49.237 { 00:11:49.237 "name": null, 00:11:49.237 "uuid": "463b7bb6-48bc-11ef-a06c-59ddad71024c", 00:11:49.237 "is_configured": false, 00:11:49.237 "data_offset": 2048, 00:11:49.237 "data_size": 63488 00:11:49.237 }, 00:11:49.237 { 00:11:49.237 "name": "BaseBdev3", 00:11:49.237 "uuid": "46a6e7ec-48bc-11ef-a06c-59ddad71024c", 00:11:49.237 "is_configured": true, 00:11:49.237 "data_offset": 2048, 00:11:49.237 "data_size": 63488 00:11:49.237 } 00:11:49.237 ] 00:11:49.237 }' 00:11:49.237 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:49.237 06:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.494 06:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.494 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:49.751 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:49.751 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:50.009 [2024-07-23 06:25:02.507497] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.009 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:50.009 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:50.009 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:50.009 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:50.009 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:50.009 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:50.009 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:50.009 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:11:50.009 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:50.009 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:50.009 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.009 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.574 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.574 "name": "Existed_Raid", 00:11:50.574 "uuid": "4722cee9-48bc-11ef-a06c-59ddad71024c", 00:11:50.574 "strip_size_kb": 64, 00:11:50.574 "state": "configuring", 00:11:50.574 "raid_level": "raid0", 00:11:50.574 "superblock": true, 00:11:50.574 "num_base_bdevs": 3, 00:11:50.574 "num_base_bdevs_discovered": 2, 00:11:50.574 "num_base_bdevs_operational": 3, 00:11:50.574 "base_bdevs_list": [ 00:11:50.574 { 00:11:50.574 "name": null, 00:11:50.574 "uuid": "484385d0-48bc-11ef-a06c-59ddad71024c", 00:11:50.574 "is_configured": false, 00:11:50.574 "data_offset": 2048, 00:11:50.574 "data_size": 63488 00:11:50.574 }, 00:11:50.574 { 00:11:50.574 "name": "BaseBdev2", 00:11:50.574 "uuid": "463b7bb6-48bc-11ef-a06c-59ddad71024c", 00:11:50.574 "is_configured": true, 00:11:50.574 "data_offset": 2048, 00:11:50.574 "data_size": 63488 00:11:50.574 }, 00:11:50.574 { 00:11:50.574 "name": "BaseBdev3", 00:11:50.574 "uuid": "46a6e7ec-48bc-11ef-a06c-59ddad71024c", 00:11:50.574 "is_configured": true, 00:11:50.575 "data_offset": 2048, 00:11:50.575 "data_size": 63488 00:11:50.575 } 00:11:50.575 ] 00:11:50.575 }' 00:11:50.575 06:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.575 06:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.832 06:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.832 06:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:51.090 06:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:51.090 06:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:51.090 06:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.356 06:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 484385d0-48bc-11ef-a06c-59ddad71024c 00:11:51.614 [2024-07-23 06:25:03.895667] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:51.614 [2024-07-23 06:25:03.895725] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3bbe98c34a00 00:11:51.614 [2024-07-23 06:25:03.895731] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:51.614 [2024-07-23 06:25:03.895752] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3bbe98c97e20 00:11:51.614 [2024-07-23 06:25:03.895798] bdev_raid.c:1750:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x3bbe98c34a00 00:11:51.614 [2024-07-23 06:25:03.895802] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3bbe98c34a00 00:11:51.614 [2024-07-23 06:25:03.895822] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.614 NewBaseBdev 00:11:51.615 06:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:51.615 06:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:11:51.615 06:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:51.615 06:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:51.615 06:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:51.615 06:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:51.615 06:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:51.872 06:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:52.130 [ 00:11:52.130 { 00:11:52.130 "name": "NewBaseBdev", 00:11:52.130 "aliases": [ 00:11:52.130 "484385d0-48bc-11ef-a06c-59ddad71024c" 00:11:52.130 ], 00:11:52.130 "product_name": "Malloc disk", 00:11:52.130 "block_size": 512, 00:11:52.130 "num_blocks": 65536, 00:11:52.130 "uuid": "484385d0-48bc-11ef-a06c-59ddad71024c", 00:11:52.130 "assigned_rate_limits": { 00:11:52.130 "rw_ios_per_sec": 0, 00:11:52.130 "rw_mbytes_per_sec": 0, 00:11:52.130 "r_mbytes_per_sec": 0, 00:11:52.130 "w_mbytes_per_sec": 0 00:11:52.130 }, 00:11:52.130 "claimed": true, 00:11:52.130 "claim_type": "exclusive_write", 00:11:52.130 "zoned": false, 00:11:52.130 "supported_io_types": { 00:11:52.130 "read": true, 00:11:52.130 "write": true, 00:11:52.130 "unmap": true, 00:11:52.130 "flush": true, 00:11:52.130 "reset": true, 00:11:52.130 "nvme_admin": false, 00:11:52.130 "nvme_io": false, 00:11:52.130 "nvme_io_md": false, 00:11:52.130 "write_zeroes": true, 00:11:52.130 "zcopy": true, 00:11:52.130 "get_zone_info": false, 00:11:52.130 "zone_management": false, 00:11:52.130 "zone_append": false, 00:11:52.130 "compare": false, 00:11:52.130 "compare_and_write": false, 00:11:52.130 "abort": true, 00:11:52.130 "seek_hole": false, 00:11:52.130 "seek_data": false, 00:11:52.130 "copy": true, 00:11:52.130 "nvme_iov_md": false 00:11:52.130 }, 00:11:52.130 "memory_domains": [ 00:11:52.130 { 00:11:52.130 "dma_device_id": "system", 00:11:52.130 "dma_device_type": 1 00:11:52.130 }, 00:11:52.130 { 00:11:52.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.130 "dma_device_type": 2 00:11:52.130 } 00:11:52.130 ], 00:11:52.130 "driver_specific": {} 00:11:52.130 } 00:11:52.130 ] 00:11:52.130 06:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:52.130 06:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:52.130 06:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:52.130 06:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:11:52.130 06:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:52.130 06:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:52.130 06:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:52.130 06:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:52.130 06:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:52.130 06:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:52.130 06:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:52.130 06:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.130 06:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.389 06:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:52.389 "name": "Existed_Raid", 00:11:52.389 "uuid": "4722cee9-48bc-11ef-a06c-59ddad71024c", 00:11:52.389 "strip_size_kb": 64, 00:11:52.389 "state": "online", 00:11:52.389 "raid_level": "raid0", 00:11:52.389 "superblock": true, 00:11:52.389 "num_base_bdevs": 3, 00:11:52.389 "num_base_bdevs_discovered": 3, 00:11:52.389 "num_base_bdevs_operational": 3, 00:11:52.389 "base_bdevs_list": [ 00:11:52.389 { 00:11:52.389 "name": "NewBaseBdev", 00:11:52.389 "uuid": "484385d0-48bc-11ef-a06c-59ddad71024c", 00:11:52.389 "is_configured": true, 00:11:52.389 "data_offset": 2048, 00:11:52.389 "data_size": 63488 00:11:52.389 }, 00:11:52.389 { 00:11:52.389 "name": "BaseBdev2", 00:11:52.389 "uuid": "463b7bb6-48bc-11ef-a06c-59ddad71024c", 00:11:52.389 "is_configured": true, 00:11:52.389 "data_offset": 2048, 00:11:52.389 "data_size": 63488 00:11:52.389 }, 00:11:52.389 { 00:11:52.389 "name": "BaseBdev3", 00:11:52.389 "uuid": "46a6e7ec-48bc-11ef-a06c-59ddad71024c", 00:11:52.389 "is_configured": true, 00:11:52.389 "data_offset": 2048, 00:11:52.389 "data_size": 63488 00:11:52.389 } 00:11:52.389 ] 00:11:52.389 }' 00:11:52.389 06:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:52.389 06:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.647 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:52.647 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:52.647 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:52.647 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:52.647 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:52.647 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:52.647 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:52.647 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
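Note: the state and property checks traced here all follow one pattern: fetch the raid bdev over the test RPC socket and compare individual JSON fields against the expected values. Below is a minimal sketch of that pattern, assuming only the rpc.py path, socket and JSON field names visible in this log; the function and variable names are illustrative, not the actual verify_raid_bdev_state helper from test/bdev/bdev_raid.sh.

    # Sketch only: mirrors the rpc.py + jq pattern shown in the xtrace above.
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    check_raid_state() {
        local name=$1 expected_state=$2 expected_level=$3
        local info state level

        # Fetch all raid bdevs and keep only the one under test.
        info=$($rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")

        state=$(jq -r '.state' <<< "$info")
        level=$(jq -r '.raid_level' <<< "$info")

        [[ "$state" == "$expected_state" ]] || return 1
        [[ "$level" == "$expected_level" ]] || return 1
    }

    # Example corresponding to the checks in this log:
    # check_raid_state Existed_Raid online raid0
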
00:11:52.905 [2024-07-23 06:25:05.423645] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.164 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:53.164 "name": "Existed_Raid", 00:11:53.164 "aliases": [ 00:11:53.164 "4722cee9-48bc-11ef-a06c-59ddad71024c" 00:11:53.164 ], 00:11:53.164 "product_name": "Raid Volume", 00:11:53.164 "block_size": 512, 00:11:53.164 "num_blocks": 190464, 00:11:53.164 "uuid": "4722cee9-48bc-11ef-a06c-59ddad71024c", 00:11:53.164 "assigned_rate_limits": { 00:11:53.164 "rw_ios_per_sec": 0, 00:11:53.164 "rw_mbytes_per_sec": 0, 00:11:53.164 "r_mbytes_per_sec": 0, 00:11:53.164 "w_mbytes_per_sec": 0 00:11:53.164 }, 00:11:53.164 "claimed": false, 00:11:53.164 "zoned": false, 00:11:53.164 "supported_io_types": { 00:11:53.164 "read": true, 00:11:53.164 "write": true, 00:11:53.164 "unmap": true, 00:11:53.164 "flush": true, 00:11:53.164 "reset": true, 00:11:53.164 "nvme_admin": false, 00:11:53.164 "nvme_io": false, 00:11:53.164 "nvme_io_md": false, 00:11:53.164 "write_zeroes": true, 00:11:53.164 "zcopy": false, 00:11:53.164 "get_zone_info": false, 00:11:53.164 "zone_management": false, 00:11:53.164 "zone_append": false, 00:11:53.164 "compare": false, 00:11:53.164 "compare_and_write": false, 00:11:53.164 "abort": false, 00:11:53.164 "seek_hole": false, 00:11:53.164 "seek_data": false, 00:11:53.164 "copy": false, 00:11:53.164 "nvme_iov_md": false 00:11:53.164 }, 00:11:53.164 "memory_domains": [ 00:11:53.164 { 00:11:53.164 "dma_device_id": "system", 00:11:53.164 "dma_device_type": 1 00:11:53.164 }, 00:11:53.164 { 00:11:53.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.164 "dma_device_type": 2 00:11:53.164 }, 00:11:53.164 { 00:11:53.164 "dma_device_id": "system", 00:11:53.164 "dma_device_type": 1 00:11:53.164 }, 00:11:53.164 { 00:11:53.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.164 "dma_device_type": 2 00:11:53.164 }, 00:11:53.164 { 00:11:53.164 "dma_device_id": "system", 00:11:53.164 "dma_device_type": 1 00:11:53.164 }, 00:11:53.164 { 00:11:53.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.164 "dma_device_type": 2 00:11:53.164 } 00:11:53.164 ], 00:11:53.164 "driver_specific": { 00:11:53.164 "raid": { 00:11:53.164 "uuid": "4722cee9-48bc-11ef-a06c-59ddad71024c", 00:11:53.164 "strip_size_kb": 64, 00:11:53.164 "state": "online", 00:11:53.164 "raid_level": "raid0", 00:11:53.164 "superblock": true, 00:11:53.164 "num_base_bdevs": 3, 00:11:53.164 "num_base_bdevs_discovered": 3, 00:11:53.164 "num_base_bdevs_operational": 3, 00:11:53.164 "base_bdevs_list": [ 00:11:53.164 { 00:11:53.164 "name": "NewBaseBdev", 00:11:53.164 "uuid": "484385d0-48bc-11ef-a06c-59ddad71024c", 00:11:53.164 "is_configured": true, 00:11:53.164 "data_offset": 2048, 00:11:53.164 "data_size": 63488 00:11:53.164 }, 00:11:53.164 { 00:11:53.164 "name": "BaseBdev2", 00:11:53.164 "uuid": "463b7bb6-48bc-11ef-a06c-59ddad71024c", 00:11:53.164 "is_configured": true, 00:11:53.164 "data_offset": 2048, 00:11:53.164 "data_size": 63488 00:11:53.164 }, 00:11:53.164 { 00:11:53.164 "name": "BaseBdev3", 00:11:53.164 "uuid": "46a6e7ec-48bc-11ef-a06c-59ddad71024c", 00:11:53.164 "is_configured": true, 00:11:53.164 "data_offset": 2048, 00:11:53.164 "data_size": 63488 00:11:53.164 } 00:11:53.164 ] 00:11:53.164 } 00:11:53.164 } 00:11:53.164 }' 00:11:53.164 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.164 
06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:53.164 BaseBdev2 00:11:53.164 BaseBdev3' 00:11:53.164 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:53.164 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:53.164 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:53.423 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:53.423 "name": "NewBaseBdev", 00:11:53.423 "aliases": [ 00:11:53.423 "484385d0-48bc-11ef-a06c-59ddad71024c" 00:11:53.423 ], 00:11:53.423 "product_name": "Malloc disk", 00:11:53.423 "block_size": 512, 00:11:53.423 "num_blocks": 65536, 00:11:53.423 "uuid": "484385d0-48bc-11ef-a06c-59ddad71024c", 00:11:53.423 "assigned_rate_limits": { 00:11:53.423 "rw_ios_per_sec": 0, 00:11:53.423 "rw_mbytes_per_sec": 0, 00:11:53.423 "r_mbytes_per_sec": 0, 00:11:53.423 "w_mbytes_per_sec": 0 00:11:53.423 }, 00:11:53.423 "claimed": true, 00:11:53.423 "claim_type": "exclusive_write", 00:11:53.423 "zoned": false, 00:11:53.423 "supported_io_types": { 00:11:53.423 "read": true, 00:11:53.423 "write": true, 00:11:53.423 "unmap": true, 00:11:53.423 "flush": true, 00:11:53.423 "reset": true, 00:11:53.423 "nvme_admin": false, 00:11:53.423 "nvme_io": false, 00:11:53.423 "nvme_io_md": false, 00:11:53.423 "write_zeroes": true, 00:11:53.423 "zcopy": true, 00:11:53.423 "get_zone_info": false, 00:11:53.423 "zone_management": false, 00:11:53.423 "zone_append": false, 00:11:53.423 "compare": false, 00:11:53.423 "compare_and_write": false, 00:11:53.423 "abort": true, 00:11:53.423 "seek_hole": false, 00:11:53.423 "seek_data": false, 00:11:53.423 "copy": true, 00:11:53.423 "nvme_iov_md": false 00:11:53.423 }, 00:11:53.423 "memory_domains": [ 00:11:53.423 { 00:11:53.423 "dma_device_id": "system", 00:11:53.423 "dma_device_type": 1 00:11:53.423 }, 00:11:53.423 { 00:11:53.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.423 "dma_device_type": 2 00:11:53.423 } 00:11:53.423 ], 00:11:53.423 "driver_specific": {} 00:11:53.423 }' 00:11:53.423 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:53.423 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:53.423 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:53.423 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:53.423 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:53.423 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:53.423 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:53.423 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:53.423 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:53.423 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:53.423 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:53.423 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:11:53.423 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:53.424 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:53.424 06:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:53.682 "name": "BaseBdev2", 00:11:53.682 "aliases": [ 00:11:53.682 "463b7bb6-48bc-11ef-a06c-59ddad71024c" 00:11:53.682 ], 00:11:53.682 "product_name": "Malloc disk", 00:11:53.682 "block_size": 512, 00:11:53.682 "num_blocks": 65536, 00:11:53.682 "uuid": "463b7bb6-48bc-11ef-a06c-59ddad71024c", 00:11:53.682 "assigned_rate_limits": { 00:11:53.682 "rw_ios_per_sec": 0, 00:11:53.682 "rw_mbytes_per_sec": 0, 00:11:53.682 "r_mbytes_per_sec": 0, 00:11:53.682 "w_mbytes_per_sec": 0 00:11:53.682 }, 00:11:53.682 "claimed": true, 00:11:53.682 "claim_type": "exclusive_write", 00:11:53.682 "zoned": false, 00:11:53.682 "supported_io_types": { 00:11:53.682 "read": true, 00:11:53.682 "write": true, 00:11:53.682 "unmap": true, 00:11:53.682 "flush": true, 00:11:53.682 "reset": true, 00:11:53.682 "nvme_admin": false, 00:11:53.682 "nvme_io": false, 00:11:53.682 "nvme_io_md": false, 00:11:53.682 "write_zeroes": true, 00:11:53.682 "zcopy": true, 00:11:53.682 "get_zone_info": false, 00:11:53.682 "zone_management": false, 00:11:53.682 "zone_append": false, 00:11:53.682 "compare": false, 00:11:53.682 "compare_and_write": false, 00:11:53.682 "abort": true, 00:11:53.682 "seek_hole": false, 00:11:53.682 "seek_data": false, 00:11:53.682 "copy": true, 00:11:53.682 "nvme_iov_md": false 00:11:53.682 }, 00:11:53.682 "memory_domains": [ 00:11:53.682 { 00:11:53.682 "dma_device_id": "system", 00:11:53.682 "dma_device_type": 1 00:11:53.682 }, 00:11:53.682 { 00:11:53.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.682 "dma_device_type": 2 00:11:53.682 } 00:11:53.682 ], 00:11:53.682 "driver_specific": {} 00:11:53.682 }' 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:53.682 06:25:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:53.682 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:54.250 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:54.250 "name": "BaseBdev3", 00:11:54.250 "aliases": [ 00:11:54.250 "46a6e7ec-48bc-11ef-a06c-59ddad71024c" 00:11:54.250 ], 00:11:54.250 "product_name": "Malloc disk", 00:11:54.250 "block_size": 512, 00:11:54.250 "num_blocks": 65536, 00:11:54.250 "uuid": "46a6e7ec-48bc-11ef-a06c-59ddad71024c", 00:11:54.250 "assigned_rate_limits": { 00:11:54.250 "rw_ios_per_sec": 0, 00:11:54.250 "rw_mbytes_per_sec": 0, 00:11:54.250 "r_mbytes_per_sec": 0, 00:11:54.250 "w_mbytes_per_sec": 0 00:11:54.250 }, 00:11:54.250 "claimed": true, 00:11:54.250 "claim_type": "exclusive_write", 00:11:54.250 "zoned": false, 00:11:54.250 "supported_io_types": { 00:11:54.250 "read": true, 00:11:54.250 "write": true, 00:11:54.250 "unmap": true, 00:11:54.250 "flush": true, 00:11:54.250 "reset": true, 00:11:54.250 "nvme_admin": false, 00:11:54.250 "nvme_io": false, 00:11:54.250 "nvme_io_md": false, 00:11:54.250 "write_zeroes": true, 00:11:54.250 "zcopy": true, 00:11:54.250 "get_zone_info": false, 00:11:54.250 "zone_management": false, 00:11:54.250 "zone_append": false, 00:11:54.250 "compare": false, 00:11:54.250 "compare_and_write": false, 00:11:54.250 "abort": true, 00:11:54.250 "seek_hole": false, 00:11:54.250 "seek_data": false, 00:11:54.250 "copy": true, 00:11:54.250 "nvme_iov_md": false 00:11:54.250 }, 00:11:54.250 "memory_domains": [ 00:11:54.250 { 00:11:54.250 "dma_device_id": "system", 00:11:54.250 "dma_device_type": 1 00:11:54.250 }, 00:11:54.250 { 00:11:54.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.250 "dma_device_type": 2 00:11:54.250 } 00:11:54.250 ], 00:11:54.250 "driver_specific": {} 00:11:54.250 }' 00:11:54.250 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.250 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.250 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:54.250 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.250 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.250 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:54.250 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.250 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.250 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:54.250 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.250 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.250 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:54.250 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:54.509 [2024-07-23 06:25:06.799645] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:11:54.509 [2024-07-23 06:25:06.799672] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.509 [2024-07-23 06:25:06.799697] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.509 [2024-07-23 06:25:06.799711] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.509 [2024-07-23 06:25:06.799715] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3bbe98c34a00 name Existed_Raid, state offline 00:11:54.509 06:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 52725 00:11:54.509 06:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 52725 ']' 00:11:54.509 06:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 52725 00:11:54.509 06:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:11:54.509 06:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:54.509 06:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 52725 00:11:54.509 06:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:11:54.509 06:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:54.509 killing process with pid 52725 00:11:54.509 06:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:11:54.509 06:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 52725' 00:11:54.509 06:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 52725 00:11:54.509 [2024-07-23 06:25:06.828850] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:54.509 06:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 52725 00:11:54.509 [2024-07-23 06:25:06.846486] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.509 06:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:11:54.509 00:11:54.509 real 0m24.400s 00:11:54.509 user 0m44.729s 00:11:54.509 sys 0m3.244s 00:11:54.509 06:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.509 06:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.509 ************************************ 00:11:54.509 END TEST raid_state_function_test_sb 00:11:54.509 ************************************ 00:11:54.768 06:25:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:54.768 06:25:07 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:11:54.768 06:25:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:54.768 06:25:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.768 06:25:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.768 ************************************ 00:11:54.768 START TEST raid_superblock_test 00:11:54.768 ************************************ 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=53453 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 53453 /var/tmp/spdk-raid.sock 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 53453 ']' 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:54.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:54.768 06:25:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.768 [2024-07-23 06:25:07.083781] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:54.768 [2024-07-23 06:25:07.083993] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:55.335 EAL: TSC is not safe to use in SMP mode 00:11:55.335 EAL: TSC is not invariant 00:11:55.335 [2024-07-23 06:25:07.627374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.335 [2024-07-23 06:25:07.735742] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
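Note: once bdev_svc is listening on /var/tmp/spdk-raid.sock, the superblock test builds its base bdevs and the raid0 volume through the same rpc.py interface. A minimal sketch of that build-up, assuming only the RPC calls and arguments that appear later in this log (the loop and variable names are illustrative, not the actual raid_superblock_test code):

    # Sketch only: RPC sequence as exercised by this run.
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    base_bdevs=()
    for i in 1 2 3; do
        # 32 MiB malloc bdev with 512-byte blocks, then a passthru bdev on top
        # with the fixed UUIDs seen in this run.
        $rpc_py bdev_malloc_create 32 512 -b "malloc$i"
        $rpc_py bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
        base_bdevs+=("pt$i")
    done

    # raid0 with a 64 KiB strip size and an on-disk superblock (-s).
    $rpc_py bdev_raid_create -z 64 -r raid0 -b "${base_bdevs[*]}" -n raid_bdev1 -s
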
00:11:55.335 [2024-07-23 06:25:07.738415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.335 [2024-07-23 06:25:07.739406] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.335 [2024-07-23 06:25:07.739415] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.903 06:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:55.903 06:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:11:55.903 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:11:55.903 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:55.903 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:11:55.903 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:11:55.903 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:55.903 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:55.903 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:55.903 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:55.903 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:56.162 malloc1 00:11:56.162 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:56.162 [2024-07-23 06:25:08.658454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:56.162 [2024-07-23 06:25:08.658526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.162 [2024-07-23 06:25:08.658565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3134a8234780 00:11:56.162 [2024-07-23 06:25:08.658573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.162 [2024-07-23 06:25:08.659511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.162 [2024-07-23 06:25:08.659538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:56.162 pt1 00:11:56.162 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:56.162 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:56.162 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:11:56.162 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:11:56.162 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:56.162 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.162 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.162 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.162 06:25:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:56.729 malloc2 00:11:56.729 06:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:57.026 [2024-07-23 06:25:09.262449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:57.026 [2024-07-23 06:25:09.262516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.026 [2024-07-23 06:25:09.262529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3134a8234c80 00:11:57.026 [2024-07-23 06:25:09.262537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.026 [2024-07-23 06:25:09.263179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.026 [2024-07-23 06:25:09.263206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:57.026 pt2 00:11:57.026 06:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:57.026 06:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:57.026 06:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:11:57.026 06:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:11:57.026 06:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:57.026 06:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:57.026 06:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:57.026 06:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:57.026 06:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:11:57.026 malloc3 00:11:57.026 06:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:57.285 [2024-07-23 06:25:09.762494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:57.285 [2024-07-23 06:25:09.762569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.285 [2024-07-23 06:25:09.762589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3134a8235180 00:11:57.285 [2024-07-23 06:25:09.762604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.285 [2024-07-23 06:25:09.763343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.285 [2024-07-23 06:25:09.763381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:57.285 pt3 00:11:57.285 06:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:57.285 06:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:57.285 06:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:11:57.544 [2024-07-23 06:25:10.034480] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:57.544 [2024-07-23 06:25:10.035079] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:57.544 [2024-07-23 06:25:10.035115] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:57.544 [2024-07-23 06:25:10.035194] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3134a8235400 00:11:57.544 [2024-07-23 06:25:10.035204] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:57.544 [2024-07-23 06:25:10.035254] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3134a8297e20 00:11:57.544 [2024-07-23 06:25:10.035369] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3134a8235400 00:11:57.544 [2024-07-23 06:25:10.035378] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3134a8235400 00:11:57.544 [2024-07-23 06:25:10.035421] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.544 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:57.544 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:57.544 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:57.544 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:57.544 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:57.544 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:57.544 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:57.544 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:57.544 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:57.544 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:57.544 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.544 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.118 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:58.118 "name": "raid_bdev1", 00:11:58.118 "uuid": "4fb8e975-48bc-11ef-a06c-59ddad71024c", 00:11:58.118 "strip_size_kb": 64, 00:11:58.118 "state": "online", 00:11:58.118 "raid_level": "raid0", 00:11:58.118 "superblock": true, 00:11:58.118 "num_base_bdevs": 3, 00:11:58.118 "num_base_bdevs_discovered": 3, 00:11:58.118 "num_base_bdevs_operational": 3, 00:11:58.118 "base_bdevs_list": [ 00:11:58.118 { 00:11:58.118 "name": "pt1", 00:11:58.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.118 "is_configured": true, 00:11:58.118 "data_offset": 2048, 00:11:58.118 "data_size": 63488 00:11:58.118 }, 00:11:58.118 { 00:11:58.118 "name": "pt2", 00:11:58.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.118 "is_configured": true, 00:11:58.118 
"data_offset": 2048, 00:11:58.118 "data_size": 63488 00:11:58.118 }, 00:11:58.118 { 00:11:58.118 "name": "pt3", 00:11:58.118 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.118 "is_configured": true, 00:11:58.118 "data_offset": 2048, 00:11:58.118 "data_size": 63488 00:11:58.118 } 00:11:58.118 ] 00:11:58.118 }' 00:11:58.118 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:58.118 06:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.376 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:58.377 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:58.377 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:58.377 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:58.377 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:58.377 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:58.377 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:58.377 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:58.377 [2024-07-23 06:25:10.854573] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.377 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:58.377 "name": "raid_bdev1", 00:11:58.377 "aliases": [ 00:11:58.377 "4fb8e975-48bc-11ef-a06c-59ddad71024c" 00:11:58.377 ], 00:11:58.377 "product_name": "Raid Volume", 00:11:58.377 "block_size": 512, 00:11:58.377 "num_blocks": 190464, 00:11:58.377 "uuid": "4fb8e975-48bc-11ef-a06c-59ddad71024c", 00:11:58.377 "assigned_rate_limits": { 00:11:58.377 "rw_ios_per_sec": 0, 00:11:58.377 "rw_mbytes_per_sec": 0, 00:11:58.377 "r_mbytes_per_sec": 0, 00:11:58.377 "w_mbytes_per_sec": 0 00:11:58.377 }, 00:11:58.377 "claimed": false, 00:11:58.377 "zoned": false, 00:11:58.377 "supported_io_types": { 00:11:58.377 "read": true, 00:11:58.377 "write": true, 00:11:58.377 "unmap": true, 00:11:58.377 "flush": true, 00:11:58.377 "reset": true, 00:11:58.377 "nvme_admin": false, 00:11:58.377 "nvme_io": false, 00:11:58.377 "nvme_io_md": false, 00:11:58.377 "write_zeroes": true, 00:11:58.377 "zcopy": false, 00:11:58.377 "get_zone_info": false, 00:11:58.377 "zone_management": false, 00:11:58.377 "zone_append": false, 00:11:58.377 "compare": false, 00:11:58.377 "compare_and_write": false, 00:11:58.377 "abort": false, 00:11:58.377 "seek_hole": false, 00:11:58.377 "seek_data": false, 00:11:58.377 "copy": false, 00:11:58.377 "nvme_iov_md": false 00:11:58.377 }, 00:11:58.377 "memory_domains": [ 00:11:58.377 { 00:11:58.377 "dma_device_id": "system", 00:11:58.377 "dma_device_type": 1 00:11:58.377 }, 00:11:58.377 { 00:11:58.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.377 "dma_device_type": 2 00:11:58.377 }, 00:11:58.377 { 00:11:58.377 "dma_device_id": "system", 00:11:58.377 "dma_device_type": 1 00:11:58.377 }, 00:11:58.377 { 00:11:58.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.377 "dma_device_type": 2 00:11:58.377 }, 00:11:58.377 { 00:11:58.377 "dma_device_id": "system", 00:11:58.377 "dma_device_type": 1 00:11:58.377 }, 00:11:58.377 { 00:11:58.377 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:58.377 "dma_device_type": 2 00:11:58.377 } 00:11:58.377 ], 00:11:58.377 "driver_specific": { 00:11:58.377 "raid": { 00:11:58.377 "uuid": "4fb8e975-48bc-11ef-a06c-59ddad71024c", 00:11:58.377 "strip_size_kb": 64, 00:11:58.377 "state": "online", 00:11:58.377 "raid_level": "raid0", 00:11:58.377 "superblock": true, 00:11:58.377 "num_base_bdevs": 3, 00:11:58.377 "num_base_bdevs_discovered": 3, 00:11:58.377 "num_base_bdevs_operational": 3, 00:11:58.377 "base_bdevs_list": [ 00:11:58.377 { 00:11:58.377 "name": "pt1", 00:11:58.377 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.377 "is_configured": true, 00:11:58.377 "data_offset": 2048, 00:11:58.377 "data_size": 63488 00:11:58.377 }, 00:11:58.377 { 00:11:58.377 "name": "pt2", 00:11:58.377 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.377 "is_configured": true, 00:11:58.377 "data_offset": 2048, 00:11:58.377 "data_size": 63488 00:11:58.377 }, 00:11:58.377 { 00:11:58.377 "name": "pt3", 00:11:58.377 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.377 "is_configured": true, 00:11:58.377 "data_offset": 2048, 00:11:58.377 "data_size": 63488 00:11:58.377 } 00:11:58.377 ] 00:11:58.377 } 00:11:58.377 } 00:11:58.377 }' 00:11:58.377 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:58.377 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:58.377 pt2 00:11:58.377 pt3' 00:11:58.377 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:58.377 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:58.377 06:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:58.646 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:58.646 "name": "pt1", 00:11:58.646 "aliases": [ 00:11:58.646 "00000000-0000-0000-0000-000000000001" 00:11:58.646 ], 00:11:58.646 "product_name": "passthru", 00:11:58.646 "block_size": 512, 00:11:58.646 "num_blocks": 65536, 00:11:58.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.646 "assigned_rate_limits": { 00:11:58.646 "rw_ios_per_sec": 0, 00:11:58.646 "rw_mbytes_per_sec": 0, 00:11:58.646 "r_mbytes_per_sec": 0, 00:11:58.646 "w_mbytes_per_sec": 0 00:11:58.646 }, 00:11:58.646 "claimed": true, 00:11:58.646 "claim_type": "exclusive_write", 00:11:58.646 "zoned": false, 00:11:58.646 "supported_io_types": { 00:11:58.646 "read": true, 00:11:58.646 "write": true, 00:11:58.646 "unmap": true, 00:11:58.646 "flush": true, 00:11:58.646 "reset": true, 00:11:58.646 "nvme_admin": false, 00:11:58.646 "nvme_io": false, 00:11:58.646 "nvme_io_md": false, 00:11:58.646 "write_zeroes": true, 00:11:58.646 "zcopy": true, 00:11:58.646 "get_zone_info": false, 00:11:58.646 "zone_management": false, 00:11:58.646 "zone_append": false, 00:11:58.646 "compare": false, 00:11:58.646 "compare_and_write": false, 00:11:58.646 "abort": true, 00:11:58.646 "seek_hole": false, 00:11:58.646 "seek_data": false, 00:11:58.646 "copy": true, 00:11:58.646 "nvme_iov_md": false 00:11:58.646 }, 00:11:58.646 "memory_domains": [ 00:11:58.646 { 00:11:58.646 "dma_device_id": "system", 00:11:58.646 "dma_device_type": 1 00:11:58.646 }, 00:11:58.646 { 00:11:58.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.646 "dma_device_type": 2 
00:11:58.646 } 00:11:58.646 ], 00:11:58.646 "driver_specific": { 00:11:58.646 "passthru": { 00:11:58.646 "name": "pt1", 00:11:58.646 "base_bdev_name": "malloc1" 00:11:58.646 } 00:11:58.646 } 00:11:58.646 }' 00:11:58.646 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:58.646 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:58.646 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:58.646 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:58.646 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:58.646 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:58.646 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:58.646 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:58.646 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:58.646 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:58.909 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:58.909 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:58.909 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:58.909 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:58.909 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:59.168 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:59.168 "name": "pt2", 00:11:59.168 "aliases": [ 00:11:59.168 "00000000-0000-0000-0000-000000000002" 00:11:59.168 ], 00:11:59.168 "product_name": "passthru", 00:11:59.168 "block_size": 512, 00:11:59.168 "num_blocks": 65536, 00:11:59.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.168 "assigned_rate_limits": { 00:11:59.168 "rw_ios_per_sec": 0, 00:11:59.168 "rw_mbytes_per_sec": 0, 00:11:59.168 "r_mbytes_per_sec": 0, 00:11:59.168 "w_mbytes_per_sec": 0 00:11:59.168 }, 00:11:59.168 "claimed": true, 00:11:59.168 "claim_type": "exclusive_write", 00:11:59.168 "zoned": false, 00:11:59.168 "supported_io_types": { 00:11:59.168 "read": true, 00:11:59.168 "write": true, 00:11:59.168 "unmap": true, 00:11:59.168 "flush": true, 00:11:59.168 "reset": true, 00:11:59.168 "nvme_admin": false, 00:11:59.168 "nvme_io": false, 00:11:59.168 "nvme_io_md": false, 00:11:59.168 "write_zeroes": true, 00:11:59.168 "zcopy": true, 00:11:59.168 "get_zone_info": false, 00:11:59.168 "zone_management": false, 00:11:59.168 "zone_append": false, 00:11:59.168 "compare": false, 00:11:59.168 "compare_and_write": false, 00:11:59.168 "abort": true, 00:11:59.168 "seek_hole": false, 00:11:59.168 "seek_data": false, 00:11:59.168 "copy": true, 00:11:59.168 "nvme_iov_md": false 00:11:59.168 }, 00:11:59.168 "memory_domains": [ 00:11:59.169 { 00:11:59.169 "dma_device_id": "system", 00:11:59.169 "dma_device_type": 1 00:11:59.169 }, 00:11:59.169 { 00:11:59.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.169 "dma_device_type": 2 00:11:59.169 } 00:11:59.169 ], 00:11:59.169 "driver_specific": { 00:11:59.169 "passthru": { 00:11:59.169 "name": "pt2", 00:11:59.169 "base_bdev_name": 
"malloc2" 00:11:59.169 } 00:11:59.169 } 00:11:59.169 }' 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:59.169 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:59.427 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:59.427 "name": "pt3", 00:11:59.427 "aliases": [ 00:11:59.427 "00000000-0000-0000-0000-000000000003" 00:11:59.427 ], 00:11:59.427 "product_name": "passthru", 00:11:59.427 "block_size": 512, 00:11:59.427 "num_blocks": 65536, 00:11:59.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.427 "assigned_rate_limits": { 00:11:59.427 "rw_ios_per_sec": 0, 00:11:59.427 "rw_mbytes_per_sec": 0, 00:11:59.427 "r_mbytes_per_sec": 0, 00:11:59.427 "w_mbytes_per_sec": 0 00:11:59.427 }, 00:11:59.427 "claimed": true, 00:11:59.427 "claim_type": "exclusive_write", 00:11:59.427 "zoned": false, 00:11:59.428 "supported_io_types": { 00:11:59.428 "read": true, 00:11:59.428 "write": true, 00:11:59.428 "unmap": true, 00:11:59.428 "flush": true, 00:11:59.428 "reset": true, 00:11:59.428 "nvme_admin": false, 00:11:59.428 "nvme_io": false, 00:11:59.428 "nvme_io_md": false, 00:11:59.428 "write_zeroes": true, 00:11:59.428 "zcopy": true, 00:11:59.428 "get_zone_info": false, 00:11:59.428 "zone_management": false, 00:11:59.428 "zone_append": false, 00:11:59.428 "compare": false, 00:11:59.428 "compare_and_write": false, 00:11:59.428 "abort": true, 00:11:59.428 "seek_hole": false, 00:11:59.428 "seek_data": false, 00:11:59.428 "copy": true, 00:11:59.428 "nvme_iov_md": false 00:11:59.428 }, 00:11:59.428 "memory_domains": [ 00:11:59.428 { 00:11:59.428 "dma_device_id": "system", 00:11:59.428 "dma_device_type": 1 00:11:59.428 }, 00:11:59.428 { 00:11:59.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.428 "dma_device_type": 2 00:11:59.428 } 00:11:59.428 ], 00:11:59.428 "driver_specific": { 00:11:59.428 "passthru": { 00:11:59.428 "name": "pt3", 00:11:59.428 "base_bdev_name": "malloc3" 00:11:59.428 } 00:11:59.428 } 00:11:59.428 }' 00:11:59.428 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:11:59.428 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:59.428 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:59.428 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:59.428 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:59.428 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:59.428 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:59.428 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:59.428 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:59.428 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:59.428 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:59.428 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:59.428 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:59.428 06:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:11:59.686 [2024-07-23 06:25:12.138576] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.686 06:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4fb8e975-48bc-11ef-a06c-59ddad71024c 00:11:59.686 06:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 4fb8e975-48bc-11ef-a06c-59ddad71024c ']' 00:11:59.686 06:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:59.945 [2024-07-23 06:25:12.430550] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:59.945 [2024-07-23 06:25:12.430577] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.945 [2024-07-23 06:25:12.430599] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.945 [2024-07-23 06:25:12.430613] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.945 [2024-07-23 06:25:12.430617] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3134a8235400 name raid_bdev1, state offline 00:11:59.945 06:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.945 06:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:00.203 06:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:00.203 06:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:12:00.203 06:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:00.203 06:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:00.461 06:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:00.461 06:25:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:00.720 06:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:00.720 06:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:00.987 06:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:00.988 06:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:01.252 06:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:01.252 06:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:01.252 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:12:01.252 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:01.252 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.252 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.252 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.252 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.252 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.252 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.252 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.252 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:01.252 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:01.511 [2024-07-23 06:25:13.950584] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:01.511 [2024-07-23 06:25:13.951200] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:01.511 [2024-07-23 06:25:13.951214] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:01.511 [2024-07-23 06:25:13.951234] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:01.511 [2024-07-23 06:25:13.951273] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:01.511 [2024-07-23 06:25:13.951285] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc3 00:12:01.511 [2024-07-23 06:25:13.951294] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:01.511 [2024-07-23 06:25:13.951298] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3134a8235180 name raid_bdev1, state configuring 00:12:01.511 request: 00:12:01.511 { 00:12:01.511 "name": "raid_bdev1", 00:12:01.511 "raid_level": "raid0", 00:12:01.511 "base_bdevs": [ 00:12:01.511 "malloc1", 00:12:01.511 "malloc2", 00:12:01.511 "malloc3" 00:12:01.511 ], 00:12:01.511 "strip_size_kb": 64, 00:12:01.511 "superblock": false, 00:12:01.511 "method": "bdev_raid_create", 00:12:01.511 "req_id": 1 00:12:01.511 } 00:12:01.511 Got JSON-RPC error response 00:12:01.511 response: 00:12:01.511 { 00:12:01.511 "code": -17, 00:12:01.511 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:01.511 } 00:12:01.511 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:12:01.511 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:01.511 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:01.511 06:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:01.511 06:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.511 06:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:01.770 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:12:01.770 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:01.770 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:02.029 [2024-07-23 06:25:14.486591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:02.029 [2024-07-23 06:25:14.486651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.029 [2024-07-23 06:25:14.486664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3134a8234c80 00:12:02.029 [2024-07-23 06:25:14.486681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.029 [2024-07-23 06:25:14.487323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.029 [2024-07-23 06:25:14.487348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:02.029 [2024-07-23 06:25:14.487373] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:02.029 [2024-07-23 06:25:14.487384] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:02.029 pt1 00:12:02.029 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:02.029 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:02.029 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:02.029 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:02.029 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:12:02.029 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:02.029 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:02.029 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:02.029 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:02.029 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:02.029 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.029 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.307 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:02.307 "name": "raid_bdev1", 00:12:02.307 "uuid": "4fb8e975-48bc-11ef-a06c-59ddad71024c", 00:12:02.307 "strip_size_kb": 64, 00:12:02.307 "state": "configuring", 00:12:02.308 "raid_level": "raid0", 00:12:02.308 "superblock": true, 00:12:02.308 "num_base_bdevs": 3, 00:12:02.308 "num_base_bdevs_discovered": 1, 00:12:02.308 "num_base_bdevs_operational": 3, 00:12:02.308 "base_bdevs_list": [ 00:12:02.308 { 00:12:02.308 "name": "pt1", 00:12:02.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:02.308 "is_configured": true, 00:12:02.308 "data_offset": 2048, 00:12:02.308 "data_size": 63488 00:12:02.308 }, 00:12:02.308 { 00:12:02.308 "name": null, 00:12:02.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:02.308 "is_configured": false, 00:12:02.308 "data_offset": 2048, 00:12:02.308 "data_size": 63488 00:12:02.308 }, 00:12:02.308 { 00:12:02.308 "name": null, 00:12:02.308 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:02.308 "is_configured": false, 00:12:02.308 "data_offset": 2048, 00:12:02.308 "data_size": 63488 00:12:02.308 } 00:12:02.308 ] 00:12:02.308 }' 00:12:02.308 06:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:02.308 06:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.569 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:12:02.569 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:02.828 [2024-07-23 06:25:15.254608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:02.828 [2024-07-23 06:25:15.254663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.828 [2024-07-23 06:25:15.254676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3134a8235680 00:12:02.828 [2024-07-23 06:25:15.254684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.828 [2024-07-23 06:25:15.254815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.828 [2024-07-23 06:25:15.254826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:02.828 [2024-07-23 06:25:15.254849] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:02.828 [2024-07-23 06:25:15.254858] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:02.828 
pt2 00:12:02.828 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:03.086 [2024-07-23 06:25:15.498623] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:03.086 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:03.086 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:03.086 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:03.086 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:03.086 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:03.086 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:03.086 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:03.086 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:03.086 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:03.086 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:03.086 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.086 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.345 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:03.345 "name": "raid_bdev1", 00:12:03.345 "uuid": "4fb8e975-48bc-11ef-a06c-59ddad71024c", 00:12:03.345 "strip_size_kb": 64, 00:12:03.345 "state": "configuring", 00:12:03.345 "raid_level": "raid0", 00:12:03.345 "superblock": true, 00:12:03.345 "num_base_bdevs": 3, 00:12:03.345 "num_base_bdevs_discovered": 1, 00:12:03.345 "num_base_bdevs_operational": 3, 00:12:03.345 "base_bdevs_list": [ 00:12:03.345 { 00:12:03.345 "name": "pt1", 00:12:03.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:03.345 "is_configured": true, 00:12:03.345 "data_offset": 2048, 00:12:03.345 "data_size": 63488 00:12:03.345 }, 00:12:03.345 { 00:12:03.345 "name": null, 00:12:03.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.345 "is_configured": false, 00:12:03.345 "data_offset": 2048, 00:12:03.345 "data_size": 63488 00:12:03.345 }, 00:12:03.345 { 00:12:03.345 "name": null, 00:12:03.345 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.345 "is_configured": false, 00:12:03.345 "data_offset": 2048, 00:12:03.345 "data_size": 63488 00:12:03.345 } 00:12:03.345 ] 00:12:03.345 }' 00:12:03.345 06:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:03.345 06:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.912 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:03.912 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:03.912 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:03.912 [2024-07-23 
06:25:16.394638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:03.912 [2024-07-23 06:25:16.394693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.912 [2024-07-23 06:25:16.394706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3134a8235680 00:12:03.912 [2024-07-23 06:25:16.394714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.912 [2024-07-23 06:25:16.394826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.912 [2024-07-23 06:25:16.394837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:03.912 [2024-07-23 06:25:16.394860] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:03.912 [2024-07-23 06:25:16.394870] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:03.912 pt2 00:12:03.912 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:03.912 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:03.912 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:04.170 [2024-07-23 06:25:16.638643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:04.170 [2024-07-23 06:25:16.638697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.170 [2024-07-23 06:25:16.638708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3134a8235400 00:12:04.170 [2024-07-23 06:25:16.638716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.170 [2024-07-23 06:25:16.638826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.170 [2024-07-23 06:25:16.638837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:04.170 [2024-07-23 06:25:16.638860] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:04.170 [2024-07-23 06:25:16.638868] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:04.170 [2024-07-23 06:25:16.638897] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3134a8234780 00:12:04.170 [2024-07-23 06:25:16.638901] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:04.170 [2024-07-23 06:25:16.638922] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3134a8297e20 00:12:04.170 [2024-07-23 06:25:16.638978] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3134a8234780 00:12:04.170 [2024-07-23 06:25:16.638983] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3134a8234780 00:12:04.170 [2024-07-23 06:25:16.639004] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.170 pt3 00:12:04.170 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:04.170 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:04.170 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:04.170 06:25:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:04.170 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:04.170 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:04.170 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:04.170 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:04.170 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:04.170 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:04.170 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:04.170 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:04.170 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.170 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.429 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:04.429 "name": "raid_bdev1", 00:12:04.429 "uuid": "4fb8e975-48bc-11ef-a06c-59ddad71024c", 00:12:04.429 "strip_size_kb": 64, 00:12:04.429 "state": "online", 00:12:04.429 "raid_level": "raid0", 00:12:04.429 "superblock": true, 00:12:04.429 "num_base_bdevs": 3, 00:12:04.429 "num_base_bdevs_discovered": 3, 00:12:04.429 "num_base_bdevs_operational": 3, 00:12:04.429 "base_bdevs_list": [ 00:12:04.429 { 00:12:04.429 "name": "pt1", 00:12:04.429 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:04.429 "is_configured": true, 00:12:04.429 "data_offset": 2048, 00:12:04.429 "data_size": 63488 00:12:04.429 }, 00:12:04.429 { 00:12:04.429 "name": "pt2", 00:12:04.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.429 "is_configured": true, 00:12:04.429 "data_offset": 2048, 00:12:04.429 "data_size": 63488 00:12:04.429 }, 00:12:04.429 { 00:12:04.429 "name": "pt3", 00:12:04.429 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.429 "is_configured": true, 00:12:04.429 "data_offset": 2048, 00:12:04.429 "data_size": 63488 00:12:04.429 } 00:12:04.429 ] 00:12:04.429 }' 00:12:04.429 06:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:04.429 06:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.687 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:04.687 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:04.687 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:04.687 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:04.687 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:04.687 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:04.687 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:04.687 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:04.947 [2024-07-23 
06:25:17.450704] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.206 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:05.206 "name": "raid_bdev1", 00:12:05.206 "aliases": [ 00:12:05.206 "4fb8e975-48bc-11ef-a06c-59ddad71024c" 00:12:05.206 ], 00:12:05.206 "product_name": "Raid Volume", 00:12:05.206 "block_size": 512, 00:12:05.206 "num_blocks": 190464, 00:12:05.206 "uuid": "4fb8e975-48bc-11ef-a06c-59ddad71024c", 00:12:05.206 "assigned_rate_limits": { 00:12:05.206 "rw_ios_per_sec": 0, 00:12:05.206 "rw_mbytes_per_sec": 0, 00:12:05.206 "r_mbytes_per_sec": 0, 00:12:05.206 "w_mbytes_per_sec": 0 00:12:05.206 }, 00:12:05.206 "claimed": false, 00:12:05.206 "zoned": false, 00:12:05.206 "supported_io_types": { 00:12:05.206 "read": true, 00:12:05.206 "write": true, 00:12:05.206 "unmap": true, 00:12:05.206 "flush": true, 00:12:05.206 "reset": true, 00:12:05.206 "nvme_admin": false, 00:12:05.206 "nvme_io": false, 00:12:05.206 "nvme_io_md": false, 00:12:05.206 "write_zeroes": true, 00:12:05.206 "zcopy": false, 00:12:05.206 "get_zone_info": false, 00:12:05.206 "zone_management": false, 00:12:05.206 "zone_append": false, 00:12:05.206 "compare": false, 00:12:05.206 "compare_and_write": false, 00:12:05.206 "abort": false, 00:12:05.206 "seek_hole": false, 00:12:05.206 "seek_data": false, 00:12:05.206 "copy": false, 00:12:05.206 "nvme_iov_md": false 00:12:05.206 }, 00:12:05.206 "memory_domains": [ 00:12:05.206 { 00:12:05.206 "dma_device_id": "system", 00:12:05.206 "dma_device_type": 1 00:12:05.206 }, 00:12:05.206 { 00:12:05.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.206 "dma_device_type": 2 00:12:05.206 }, 00:12:05.206 { 00:12:05.206 "dma_device_id": "system", 00:12:05.206 "dma_device_type": 1 00:12:05.206 }, 00:12:05.206 { 00:12:05.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.206 "dma_device_type": 2 00:12:05.206 }, 00:12:05.206 { 00:12:05.206 "dma_device_id": "system", 00:12:05.206 "dma_device_type": 1 00:12:05.206 }, 00:12:05.206 { 00:12:05.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.206 "dma_device_type": 2 00:12:05.206 } 00:12:05.206 ], 00:12:05.206 "driver_specific": { 00:12:05.206 "raid": { 00:12:05.206 "uuid": "4fb8e975-48bc-11ef-a06c-59ddad71024c", 00:12:05.206 "strip_size_kb": 64, 00:12:05.206 "state": "online", 00:12:05.206 "raid_level": "raid0", 00:12:05.206 "superblock": true, 00:12:05.206 "num_base_bdevs": 3, 00:12:05.206 "num_base_bdevs_discovered": 3, 00:12:05.206 "num_base_bdevs_operational": 3, 00:12:05.206 "base_bdevs_list": [ 00:12:05.206 { 00:12:05.206 "name": "pt1", 00:12:05.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.206 "is_configured": true, 00:12:05.206 "data_offset": 2048, 00:12:05.206 "data_size": 63488 00:12:05.206 }, 00:12:05.206 { 00:12:05.206 "name": "pt2", 00:12:05.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.206 "is_configured": true, 00:12:05.206 "data_offset": 2048, 00:12:05.206 "data_size": 63488 00:12:05.206 }, 00:12:05.206 { 00:12:05.206 "name": "pt3", 00:12:05.206 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.206 "is_configured": true, 00:12:05.206 "data_offset": 2048, 00:12:05.206 "data_size": 63488 00:12:05.206 } 00:12:05.206 ] 00:12:05.206 } 00:12:05.206 } 00:12:05.206 }' 00:12:05.206 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.206 06:25:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:05.206 pt2 00:12:05.206 pt3' 00:12:05.206 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:05.206 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:05.206 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:05.206 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:05.206 "name": "pt1", 00:12:05.206 "aliases": [ 00:12:05.206 "00000000-0000-0000-0000-000000000001" 00:12:05.206 ], 00:12:05.206 "product_name": "passthru", 00:12:05.206 "block_size": 512, 00:12:05.206 "num_blocks": 65536, 00:12:05.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.206 "assigned_rate_limits": { 00:12:05.206 "rw_ios_per_sec": 0, 00:12:05.206 "rw_mbytes_per_sec": 0, 00:12:05.206 "r_mbytes_per_sec": 0, 00:12:05.206 "w_mbytes_per_sec": 0 00:12:05.206 }, 00:12:05.206 "claimed": true, 00:12:05.206 "claim_type": "exclusive_write", 00:12:05.206 "zoned": false, 00:12:05.206 "supported_io_types": { 00:12:05.206 "read": true, 00:12:05.206 "write": true, 00:12:05.206 "unmap": true, 00:12:05.206 "flush": true, 00:12:05.206 "reset": true, 00:12:05.206 "nvme_admin": false, 00:12:05.206 "nvme_io": false, 00:12:05.206 "nvme_io_md": false, 00:12:05.206 "write_zeroes": true, 00:12:05.206 "zcopy": true, 00:12:05.206 "get_zone_info": false, 00:12:05.206 "zone_management": false, 00:12:05.206 "zone_append": false, 00:12:05.206 "compare": false, 00:12:05.206 "compare_and_write": false, 00:12:05.206 "abort": true, 00:12:05.206 "seek_hole": false, 00:12:05.206 "seek_data": false, 00:12:05.206 "copy": true, 00:12:05.206 "nvme_iov_md": false 00:12:05.206 }, 00:12:05.206 "memory_domains": [ 00:12:05.206 { 00:12:05.206 "dma_device_id": "system", 00:12:05.206 "dma_device_type": 1 00:12:05.206 }, 00:12:05.206 { 00:12:05.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.206 "dma_device_type": 2 00:12:05.206 } 00:12:05.206 ], 00:12:05.206 "driver_specific": { 00:12:05.206 "passthru": { 00:12:05.206 "name": "pt1", 00:12:05.206 "base_bdev_name": "malloc1" 00:12:05.206 } 00:12:05.206 } 00:12:05.206 }' 00:12:05.206 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:05.206 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:05.465 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:05.465 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:05.465 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:05.465 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:05.465 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:05.465 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:05.465 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:05.465 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:05.465 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:05.465 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:05.465 06:25:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:05.465 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:05.465 06:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:05.724 "name": "pt2", 00:12:05.724 "aliases": [ 00:12:05.724 "00000000-0000-0000-0000-000000000002" 00:12:05.724 ], 00:12:05.724 "product_name": "passthru", 00:12:05.724 "block_size": 512, 00:12:05.724 "num_blocks": 65536, 00:12:05.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.724 "assigned_rate_limits": { 00:12:05.724 "rw_ios_per_sec": 0, 00:12:05.724 "rw_mbytes_per_sec": 0, 00:12:05.724 "r_mbytes_per_sec": 0, 00:12:05.724 "w_mbytes_per_sec": 0 00:12:05.724 }, 00:12:05.724 "claimed": true, 00:12:05.724 "claim_type": "exclusive_write", 00:12:05.724 "zoned": false, 00:12:05.724 "supported_io_types": { 00:12:05.724 "read": true, 00:12:05.724 "write": true, 00:12:05.724 "unmap": true, 00:12:05.724 "flush": true, 00:12:05.724 "reset": true, 00:12:05.724 "nvme_admin": false, 00:12:05.724 "nvme_io": false, 00:12:05.724 "nvme_io_md": false, 00:12:05.724 "write_zeroes": true, 00:12:05.724 "zcopy": true, 00:12:05.724 "get_zone_info": false, 00:12:05.724 "zone_management": false, 00:12:05.724 "zone_append": false, 00:12:05.724 "compare": false, 00:12:05.724 "compare_and_write": false, 00:12:05.724 "abort": true, 00:12:05.724 "seek_hole": false, 00:12:05.724 "seek_data": false, 00:12:05.724 "copy": true, 00:12:05.724 "nvme_iov_md": false 00:12:05.724 }, 00:12:05.724 "memory_domains": [ 00:12:05.724 { 00:12:05.724 "dma_device_id": "system", 00:12:05.724 "dma_device_type": 1 00:12:05.724 }, 00:12:05.724 { 00:12:05.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.724 "dma_device_type": 2 00:12:05.724 } 00:12:05.724 ], 00:12:05.724 "driver_specific": { 00:12:05.724 "passthru": { 00:12:05.724 "name": "pt2", 00:12:05.724 "base_bdev_name": "malloc2" 00:12:05.724 } 00:12:05.724 } 00:12:05.724 }' 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:05.724 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:05.983 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:05.983 "name": "pt3", 00:12:05.983 "aliases": [ 00:12:05.983 "00000000-0000-0000-0000-000000000003" 00:12:05.983 ], 00:12:05.983 "product_name": "passthru", 00:12:05.983 "block_size": 512, 00:12:05.983 "num_blocks": 65536, 00:12:05.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.983 "assigned_rate_limits": { 00:12:05.983 "rw_ios_per_sec": 0, 00:12:05.983 "rw_mbytes_per_sec": 0, 00:12:05.983 "r_mbytes_per_sec": 0, 00:12:05.983 "w_mbytes_per_sec": 0 00:12:05.983 }, 00:12:05.983 "claimed": true, 00:12:05.983 "claim_type": "exclusive_write", 00:12:05.983 "zoned": false, 00:12:05.983 "supported_io_types": { 00:12:05.983 "read": true, 00:12:05.984 "write": true, 00:12:05.984 "unmap": true, 00:12:05.984 "flush": true, 00:12:05.984 "reset": true, 00:12:05.984 "nvme_admin": false, 00:12:05.984 "nvme_io": false, 00:12:05.984 "nvme_io_md": false, 00:12:05.984 "write_zeroes": true, 00:12:05.984 "zcopy": true, 00:12:05.984 "get_zone_info": false, 00:12:05.984 "zone_management": false, 00:12:05.984 "zone_append": false, 00:12:05.984 "compare": false, 00:12:05.984 "compare_and_write": false, 00:12:05.984 "abort": true, 00:12:05.984 "seek_hole": false, 00:12:05.984 "seek_data": false, 00:12:05.984 "copy": true, 00:12:05.984 "nvme_iov_md": false 00:12:05.984 }, 00:12:05.984 "memory_domains": [ 00:12:05.984 { 00:12:05.984 "dma_device_id": "system", 00:12:05.984 "dma_device_type": 1 00:12:05.984 }, 00:12:05.984 { 00:12:05.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.984 "dma_device_type": 2 00:12:05.984 } 00:12:05.984 ], 00:12:05.984 "driver_specific": { 00:12:05.984 "passthru": { 00:12:05.984 "name": "pt3", 00:12:05.984 "base_bdev_name": "malloc3" 00:12:05.984 } 00:12:05.984 } 00:12:05.984 }' 00:12:05.984 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:05.984 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:05.984 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:05.984 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:05.984 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:05.984 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:05.984 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:05.984 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:05.984 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:05.984 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:05.984 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:05.984 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:05.984 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:05.984 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:06.242 [2024-07-23 06:25:18.730835] 
bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 4fb8e975-48bc-11ef-a06c-59ddad71024c '!=' 4fb8e975-48bc-11ef-a06c-59ddad71024c ']' 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 53453 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 53453 ']' 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 53453 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 53453 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53453' 00:12:06.242 killing process with pid 53453 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 53453 00:12:06.242 [2024-07-23 06:25:18.758621] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:06.242 06:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 53453 00:12:06.242 [2024-07-23 06:25:18.758661] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.242 [2024-07-23 06:25:18.758681] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.242 [2024-07-23 06:25:18.758685] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3134a8234780 name raid_bdev1, state offline 00:12:06.500 [2024-07-23 06:25:18.783707] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:06.758 06:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:06.758 00:12:06.758 real 0m11.962s 00:12:06.758 user 0m21.268s 00:12:06.758 sys 0m1.791s 00:12:06.758 06:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:06.758 06:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.758 ************************************ 00:12:06.758 END TEST raid_superblock_test 00:12:06.758 ************************************ 00:12:06.758 06:25:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:06.758 06:25:19 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:12:06.758 06:25:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:06.758 06:25:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.758 06:25:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:06.758 ************************************ 
00:12:06.758 START TEST raid_read_error_test 00:12:06.758 ************************************ 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.37ULwStnIV 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53804 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53804 /var/tmp/spdk-raid.sock 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 53804 ']' 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.758 06:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.758 [2024-07-23 06:25:19.086300] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:06.758 [2024-07-23 06:25:19.086530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:07.325 EAL: TSC is not safe to use in SMP mode 00:12:07.325 EAL: TSC is not invariant 00:12:07.325 [2024-07-23 06:25:19.680809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.325 [2024-07-23 06:25:19.780734] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:07.325 [2024-07-23 06:25:19.783365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.325 [2024-07-23 06:25:19.784378] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.325 [2024-07-23 06:25:19.784395] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.928 06:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:07.928 06:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:07.928 06:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:07.928 06:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:07.928 BaseBdev1_malloc 00:12:07.928 06:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:08.186 true 00:12:08.186 06:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:08.444 [2024-07-23 06:25:20.886109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:08.444 [2024-07-23 06:25:20.886170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.444 [2024-07-23 06:25:20.886217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x356504e34780 00:12:08.444 [2024-07-23 06:25:20.886238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.444 [2024-07-23 06:25:20.886919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.444 [2024-07-23 06:25:20.886947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:08.444 BaseBdev1 00:12:08.444 06:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:08.444 06:25:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:08.703 BaseBdev2_malloc 00:12:08.703 06:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:08.961 true 00:12:08.961 06:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:09.219 [2024-07-23 06:25:21.658128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:09.219 [2024-07-23 06:25:21.658186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.219 [2024-07-23 06:25:21.658223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x356504e34c80 00:12:09.219 [2024-07-23 06:25:21.658238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.219 [2024-07-23 06:25:21.658944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.219 [2024-07-23 06:25:21.658973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:09.219 BaseBdev2 00:12:09.219 06:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:09.219 06:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:09.477 BaseBdev3_malloc 00:12:09.478 06:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:09.736 true 00:12:09.736 06:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:09.995 [2024-07-23 06:25:22.454145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:09.995 [2024-07-23 06:25:22.454202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.995 [2024-07-23 06:25:22.454240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x356504e35180 00:12:09.995 [2024-07-23 06:25:22.454255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.995 [2024-07-23 06:25:22.454914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.995 [2024-07-23 06:25:22.454941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:09.995 BaseBdev3 00:12:09.995 06:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:10.561 [2024-07-23 06:25:22.774165] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.561 [2024-07-23 06:25:22.774744] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.561 [2024-07-23 06:25:22.774770] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.561 
[2024-07-23 06:25:22.774829] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x356504e35400 00:12:10.561 [2024-07-23 06:25:22.774835] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:10.561 [2024-07-23 06:25:22.774874] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x356504ea0e20 00:12:10.561 [2024-07-23 06:25:22.774945] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x356504e35400 00:12:10.561 [2024-07-23 06:25:22.774950] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x356504e35400 00:12:10.561 [2024-07-23 06:25:22.774978] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.561 06:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:10.561 06:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:10.561 06:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:10.561 06:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:10.561 06:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:10.561 06:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:10.561 06:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.561 06:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.561 06:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.561 06:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.561 06:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.561 06:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.561 06:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:10.561 "name": "raid_bdev1", 00:12:10.561 "uuid": "5750d565-48bc-11ef-a06c-59ddad71024c", 00:12:10.561 "strip_size_kb": 64, 00:12:10.561 "state": "online", 00:12:10.561 "raid_level": "raid0", 00:12:10.561 "superblock": true, 00:12:10.561 "num_base_bdevs": 3, 00:12:10.561 "num_base_bdevs_discovered": 3, 00:12:10.561 "num_base_bdevs_operational": 3, 00:12:10.561 "base_bdevs_list": [ 00:12:10.561 { 00:12:10.561 "name": "BaseBdev1", 00:12:10.561 "uuid": "5ba0b6a8-046e-5a52-8298-35b07109097f", 00:12:10.561 "is_configured": true, 00:12:10.561 "data_offset": 2048, 00:12:10.561 "data_size": 63488 00:12:10.561 }, 00:12:10.561 { 00:12:10.561 "name": "BaseBdev2", 00:12:10.561 "uuid": "9b3526c0-30ab-c25b-91f7-e010cb8803d1", 00:12:10.562 "is_configured": true, 00:12:10.562 "data_offset": 2048, 00:12:10.562 "data_size": 63488 00:12:10.562 }, 00:12:10.562 { 00:12:10.562 "name": "BaseBdev3", 00:12:10.562 "uuid": "8506aa09-c628-ee53-8a5b-aa7daa5e4005", 00:12:10.562 "is_configured": true, 00:12:10.562 "data_offset": 2048, 00:12:10.562 "data_size": 63488 00:12:10.562 } 00:12:10.562 ] 00:12:10.562 }' 00:12:10.562 06:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:10.562 06:25:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.820 06:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:10.820 06:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:11.078 [2024-07-23 06:25:23.438370] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x356504ea0ec0 00:12:12.014 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.274 06:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.532 06:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:12.532 "name": "raid_bdev1", 00:12:12.533 "uuid": "5750d565-48bc-11ef-a06c-59ddad71024c", 00:12:12.533 "strip_size_kb": 64, 00:12:12.533 "state": "online", 00:12:12.533 "raid_level": "raid0", 00:12:12.533 "superblock": true, 00:12:12.533 "num_base_bdevs": 3, 00:12:12.533 "num_base_bdevs_discovered": 3, 00:12:12.533 "num_base_bdevs_operational": 3, 00:12:12.533 "base_bdevs_list": [ 00:12:12.533 { 00:12:12.533 "name": "BaseBdev1", 00:12:12.533 "uuid": "5ba0b6a8-046e-5a52-8298-35b07109097f", 00:12:12.533 "is_configured": true, 00:12:12.533 "data_offset": 2048, 00:12:12.533 "data_size": 63488 00:12:12.533 }, 00:12:12.533 { 00:12:12.533 "name": "BaseBdev2", 00:12:12.533 "uuid": "9b3526c0-30ab-c25b-91f7-e010cb8803d1", 00:12:12.533 "is_configured": true, 00:12:12.533 "data_offset": 2048, 00:12:12.533 "data_size": 63488 00:12:12.533 }, 00:12:12.533 { 00:12:12.533 "name": "BaseBdev3", 00:12:12.533 "uuid": "8506aa09-c628-ee53-8a5b-aa7daa5e4005", 00:12:12.533 "is_configured": true, 00:12:12.533 "data_offset": 2048, 00:12:12.533 "data_size": 63488 
00:12:12.533 } 00:12:12.533 ] 00:12:12.533 }' 00:12:12.533 06:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:12.533 06:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.100 06:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:13.359 [2024-07-23 06:25:25.656953] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:13.359 [2024-07-23 06:25:25.656991] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.359 [2024-07-23 06:25:25.657313] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.359 [2024-07-23 06:25:25.657324] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.359 [2024-07-23 06:25:25.657331] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.359 [2024-07-23 06:25:25.657335] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x356504e35400 name raid_bdev1, state offline 00:12:13.359 0 00:12:13.359 06:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 53804 00:12:13.359 06:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 53804 ']' 00:12:13.359 06:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 53804 00:12:13.359 06:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:12:13.359 06:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:13.359 06:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 53804 00:12:13.359 06:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:12:13.359 06:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:12:13.359 06:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:12:13.359 killing process with pid 53804 00:12:13.359 06:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53804' 00:12:13.359 06:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 53804 00:12:13.359 [2024-07-23 06:25:25.686932] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:13.359 06:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 53804 00:12:13.359 [2024-07-23 06:25:25.706153] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.618 06:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.37ULwStnIV 00:12:13.618 06:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:13.618 06:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:13.618 06:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:12:13.618 06:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:12:13.618 06:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:13.618 06:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:13.618 06:25:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:12:13.618 00:12:13.618 real 0m6.814s 00:12:13.618 user 0m10.759s 00:12:13.618 sys 0m1.149s 00:12:13.618 06:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:13.618 06:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.618 ************************************ 00:12:13.618 END TEST raid_read_error_test 00:12:13.618 ************************************ 00:12:13.618 06:25:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:13.618 06:25:25 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:12:13.618 06:25:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:13.618 06:25:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.618 06:25:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:13.618 ************************************ 00:12:13.618 START TEST raid_write_error_test 00:12:13.618 ************************************ 00:12:13.618 06:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:12:13.618 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:12:13.618 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:13.618 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:12:13.618 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:13.618 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:13.618 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:13.618 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:13.618 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:13.618 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:13.618 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:13.618 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:13.618 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:13.618 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:12:13.619 06:25:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.mCcnsmqFYy 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53939 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53939 /var/tmp/spdk-raid.sock 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 53939 ']' 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.619 06:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.619 [2024-07-23 06:25:25.951251] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:13.619 [2024-07-23 06:25:25.951496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:14.185 EAL: TSC is not safe to use in SMP mode 00:12:14.185 EAL: TSC is not invariant 00:12:14.185 [2024-07-23 06:25:26.508383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.185 [2024-07-23 06:25:26.633357] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
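The read-error test that just finished and the write-error test starting up here both trace the same flow: each base bdev is a malloc bdev wrapped in an error bdev and a passthru bdev, the three members are striped into raid_bdev1, an error is injected into the first member, and the per-second failure count is read back from bdevperf's log. A condensed sketch of that sequence, using the RPC socket and paths shown in the trace (the bdevperf log name is whatever mktemp returned, /raidtest/tmp.37ULwStnIV in the run above); how the traced script redirects bdevperf's output into that log is not visible in this excerpt.

```sh
# Minimal sketch of the raid_io_error_test flow traced above (read variant);
# assumes the bdevperf app from the trace is already listening on the socket:
#   build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
#     -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
bdevperf_log=$(mktemp -p /raidtest)   # /raidtest/tmp.37ULwStnIV in the run above

# Stack each base bdev as malloc -> error -> passthru, as in the trace.
for bdev in BaseBdev1 BaseBdev2 BaseBdev3; do
    $rpc bdev_malloc_create 32 512 -b "${bdev}_malloc"        # 65536 x 512-byte blocks
    $rpc bdev_error_create "${bdev}_malloc"                   # exposes EE_<name>_malloc
    $rpc bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
done

# Stripe the three members with a 64 KiB strip size and an on-disk superblock (-s).
# Each member keeps 2048 blocks for the superblock (data_offset), leaving 63488
# data blocks, so the array reports blockcnt 3 * 63488 = 190464 in the trace.
$rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

# Inject read failures into the first member and run the bdevperf job.
$rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/spdk-raid.sock perform_tests

# raid0 has no redundancy, so the injected errors must surface as a non-zero
# per-second failure count in bdevperf's output for raid_bdev1.
fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
[[ $fail_per_s != "0.00" ]]
```

The write-error variant differs only in the injected I/O type (`bdev_error_inject_error EE_BaseBdev1_malloc write failure`) and in the pid/log names, as the trace below shows.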
00:12:14.185 [2024-07-23 06:25:26.636271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.185 [2024-07-23 06:25:26.637453] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.186 [2024-07-23 06:25:26.637474] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.752 06:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.752 06:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:14.752 06:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:14.752 06:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:15.010 BaseBdev1_malloc 00:12:15.010 06:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:15.268 true 00:12:15.268 06:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:15.527 [2024-07-23 06:25:27.805416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:15.527 [2024-07-23 06:25:27.805489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.527 [2024-07-23 06:25:27.805518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x55a24634780 00:12:15.527 [2024-07-23 06:25:27.805537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.527 [2024-07-23 06:25:27.806222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.527 [2024-07-23 06:25:27.806248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:15.527 BaseBdev1 00:12:15.527 06:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:15.527 06:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:15.786 BaseBdev2_malloc 00:12:15.786 06:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:16.044 true 00:12:16.044 06:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:16.044 [2024-07-23 06:25:28.557446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:16.044 [2024-07-23 06:25:28.557501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.044 [2024-07-23 06:25:28.557526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x55a24634c80 00:12:16.044 [2024-07-23 06:25:28.557535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.044 [2024-07-23 06:25:28.558240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.044 [2024-07-23 06:25:28.558262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:12:16.044 BaseBdev2 00:12:16.302 06:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:16.302 06:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:16.302 BaseBdev3_malloc 00:12:16.302 06:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:16.869 true 00:12:16.869 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:16.869 [2024-07-23 06:25:29.329490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:16.869 [2024-07-23 06:25:29.329545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.869 [2024-07-23 06:25:29.329579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x55a24635180 00:12:16.869 [2024-07-23 06:25:29.329588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.869 [2024-07-23 06:25:29.330273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.869 [2024-07-23 06:25:29.330299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:16.869 BaseBdev3 00:12:16.869 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:17.127 [2024-07-23 06:25:29.629518] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.127 [2024-07-23 06:25:29.630158] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.127 [2024-07-23 06:25:29.630183] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.127 [2024-07-23 06:25:29.630242] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x55a24635400 00:12:17.127 [2024-07-23 06:25:29.630248] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:17.127 [2024-07-23 06:25:29.630286] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x55a246a0e20 00:12:17.127 [2024-07-23 06:25:29.630360] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x55a24635400 00:12:17.127 [2024-07-23 06:25:29.630365] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x55a24635400 00:12:17.127 [2024-07-23 06:25:29.630394] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.386 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:17.386 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:17.386 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:17.386 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:17.386 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:12:17.386 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:17.386 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:17.386 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:17.386 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:17.386 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:17.386 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.386 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.683 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:17.683 "name": "raid_bdev1", 00:12:17.683 "uuid": "5b66e0b2-48bc-11ef-a06c-59ddad71024c", 00:12:17.683 "strip_size_kb": 64, 00:12:17.683 "state": "online", 00:12:17.683 "raid_level": "raid0", 00:12:17.683 "superblock": true, 00:12:17.683 "num_base_bdevs": 3, 00:12:17.683 "num_base_bdevs_discovered": 3, 00:12:17.683 "num_base_bdevs_operational": 3, 00:12:17.683 "base_bdevs_list": [ 00:12:17.683 { 00:12:17.683 "name": "BaseBdev1", 00:12:17.683 "uuid": "96e5929f-a3cd-3b54-85e7-d3eb7d0d9359", 00:12:17.683 "is_configured": true, 00:12:17.683 "data_offset": 2048, 00:12:17.683 "data_size": 63488 00:12:17.683 }, 00:12:17.683 { 00:12:17.683 "name": "BaseBdev2", 00:12:17.683 "uuid": "d2abad25-14cf-b652-b701-b4a6787af67f", 00:12:17.683 "is_configured": true, 00:12:17.683 "data_offset": 2048, 00:12:17.683 "data_size": 63488 00:12:17.683 }, 00:12:17.683 { 00:12:17.683 "name": "BaseBdev3", 00:12:17.683 "uuid": "2e15626f-f8d9-d65d-ab9b-de29048a590a", 00:12:17.683 "is_configured": true, 00:12:17.683 "data_offset": 2048, 00:12:17.683 "data_size": 63488 00:12:17.683 } 00:12:17.683 ] 00:12:17.683 }' 00:12:17.683 06:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:17.683 06:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.942 06:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:17.942 06:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:17.942 [2024-07-23 06:25:30.373750] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x55a246a0ec0 00:12:18.879 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:19.138 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.397 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:19.397 "name": "raid_bdev1", 00:12:19.397 "uuid": "5b66e0b2-48bc-11ef-a06c-59ddad71024c", 00:12:19.397 "strip_size_kb": 64, 00:12:19.397 "state": "online", 00:12:19.397 "raid_level": "raid0", 00:12:19.397 "superblock": true, 00:12:19.397 "num_base_bdevs": 3, 00:12:19.397 "num_base_bdevs_discovered": 3, 00:12:19.397 "num_base_bdevs_operational": 3, 00:12:19.397 "base_bdevs_list": [ 00:12:19.397 { 00:12:19.397 "name": "BaseBdev1", 00:12:19.397 "uuid": "96e5929f-a3cd-3b54-85e7-d3eb7d0d9359", 00:12:19.397 "is_configured": true, 00:12:19.397 "data_offset": 2048, 00:12:19.397 "data_size": 63488 00:12:19.397 }, 00:12:19.397 { 00:12:19.397 "name": "BaseBdev2", 00:12:19.397 "uuid": "d2abad25-14cf-b652-b701-b4a6787af67f", 00:12:19.397 "is_configured": true, 00:12:19.397 "data_offset": 2048, 00:12:19.397 "data_size": 63488 00:12:19.397 }, 00:12:19.397 { 00:12:19.397 "name": "BaseBdev3", 00:12:19.397 "uuid": "2e15626f-f8d9-d65d-ab9b-de29048a590a", 00:12:19.397 "is_configured": true, 00:12:19.397 "data_offset": 2048, 00:12:19.397 "data_size": 63488 00:12:19.397 } 00:12:19.397 ] 00:12:19.397 }' 00:12:19.397 06:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:19.397 06:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.976 06:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:19.976 [2024-07-23 06:25:32.443763] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:19.976 [2024-07-23 06:25:32.443795] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.976 [2024-07-23 06:25:32.444128] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.976 [2024-07-23 06:25:32.444148] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.976 [2024-07-23 06:25:32.444156] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.976 [2024-07-23 06:25:32.444161] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x55a24635400 name raid_bdev1, state offline 00:12:19.976 0 00:12:19.976 06:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # 
killprocess 53939 00:12:19.976 06:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 53939 ']' 00:12:19.976 06:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 53939 00:12:19.976 06:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:12:19.976 06:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:19.976 06:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 53939 00:12:19.976 06:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:12:19.976 06:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:12:19.976 killing process with pid 53939 00:12:19.977 06:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:12:19.977 06:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53939' 00:12:19.977 06:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 53939 00:12:19.977 [2024-07-23 06:25:32.471347] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.977 06:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 53939 00:12:19.977 [2024-07-23 06:25:32.488691] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.236 06:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.mCcnsmqFYy 00:12:20.236 06:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:20.236 06:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:20.236 06:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:12:20.236 06:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:12:20.236 06:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:20.236 06:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:20.236 06:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:12:20.236 00:12:20.236 real 0m6.743s 00:12:20.236 user 0m10.660s 00:12:20.236 sys 0m1.108s 00:12:20.236 06:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:20.236 06:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.236 ************************************ 00:12:20.236 END TEST raid_write_error_test 00:12:20.236 ************************************ 00:12:20.236 06:25:32 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:20.236 06:25:32 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:12:20.236 06:25:32 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:12:20.236 06:25:32 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:20.236 06:25:32 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.236 06:25:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.236 ************************************ 00:12:20.236 START TEST raid_state_function_test 00:12:20.236 ************************************ 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # 
raid_state_function_test concat 3 false 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=54068 00:12:20.236 Process raid pid: 54068 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 54068' 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 54068 /var/tmp/spdk-raid.sock 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 54068 ']' 
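raid_state_function_test, whose startup is traced here, exercises the raid bdev state machine rather than I/O: it declares a concat array whose members do not exist yet, confirms the array sits in the "configuring" state, then creates the base bdevs one by one until enough members are discovered. A minimal sketch of the opening steps, assuming the same rpc.py socket as in the traces above; strip size and bdev names are taken from the trace that follows.

```sh
# Sketch of the opening raid_state_function_test steps traced below.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Declare the array before any base bdev exists: it registers, but with zero
# members discovered it must stay in the "configuring" state.
$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# Tear it down, re-declare it, and start creating the members; each malloc
# bdev is claimed by the array as soon as it appears, and the state only
# changes once all of the declared members are discovered.
$rpc bdev_raid_delete Existed_Raid
$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
$rpc bdev_malloc_create 32 512 -b BaseBdev1   # 1 of 3 discovered: still "configuring"
```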
00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:20.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:20.236 06:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.236 [2024-07-23 06:25:32.736499] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:20.236 [2024-07-23 06:25:32.736765] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:20.803 EAL: TSC is not safe to use in SMP mode 00:12:20.803 EAL: TSC is not invariant 00:12:20.803 [2024-07-23 06:25:33.273166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.061 [2024-07-23 06:25:33.358177] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:21.061 [2024-07-23 06:25:33.360255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.061 [2024-07-23 06:25:33.361035] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.061 [2024-07-23 06:25:33.361048] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.319 06:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.319 06:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:12:21.319 06:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:21.578 [2024-07-23 06:25:33.969247] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:21.578 [2024-07-23 06:25:33.969302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:21.578 [2024-07-23 06:25:33.969307] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:21.578 [2024-07-23 06:25:33.969316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:21.578 [2024-07-23 06:25:33.969320] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:21.578 [2024-07-23 06:25:33.969327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:21.578 06:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:21.578 06:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:21.578 06:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:21.578 06:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:21.578 06:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:12:21.578 06:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:21.578 06:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:21.578 06:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:21.578 06:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:21.578 06:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:21.578 06:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.578 06:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.837 06:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:21.837 "name": "Existed_Raid", 00:12:21.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.837 "strip_size_kb": 64, 00:12:21.837 "state": "configuring", 00:12:21.837 "raid_level": "concat", 00:12:21.837 "superblock": false, 00:12:21.837 "num_base_bdevs": 3, 00:12:21.837 "num_base_bdevs_discovered": 0, 00:12:21.837 "num_base_bdevs_operational": 3, 00:12:21.837 "base_bdevs_list": [ 00:12:21.837 { 00:12:21.837 "name": "BaseBdev1", 00:12:21.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.837 "is_configured": false, 00:12:21.837 "data_offset": 0, 00:12:21.837 "data_size": 0 00:12:21.837 }, 00:12:21.837 { 00:12:21.837 "name": "BaseBdev2", 00:12:21.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.837 "is_configured": false, 00:12:21.837 "data_offset": 0, 00:12:21.837 "data_size": 0 00:12:21.837 }, 00:12:21.837 { 00:12:21.837 "name": "BaseBdev3", 00:12:21.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.837 "is_configured": false, 00:12:21.837 "data_offset": 0, 00:12:21.837 "data_size": 0 00:12:21.837 } 00:12:21.837 ] 00:12:21.837 }' 00:12:21.837 06:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:21.837 06:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.095 06:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:22.404 [2024-07-23 06:25:34.889251] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:22.404 [2024-07-23 06:25:34.889281] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2825f1434500 name Existed_Raid, state configuring 00:12:22.662 06:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:22.662 [2024-07-23 06:25:35.117259] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:22.662 [2024-07-23 06:25:35.117306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:22.662 [2024-07-23 06:25:35.117311] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:22.662 [2024-07-23 06:25:35.117320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:22.662 [2024-07-23 
06:25:35.117323] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:22.662 [2024-07-23 06:25:35.117330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:22.662 06:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:22.921 [2024-07-23 06:25:35.350344] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.921 BaseBdev1 00:12:22.921 06:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:22.921 06:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:22.921 06:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:22.921 06:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:22.921 06:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:22.921 06:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:22.921 06:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:23.178 06:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:23.436 [ 00:12:23.436 { 00:12:23.436 "name": "BaseBdev1", 00:12:23.436 "aliases": [ 00:12:23.436 "5ecfa546-48bc-11ef-a06c-59ddad71024c" 00:12:23.436 ], 00:12:23.436 "product_name": "Malloc disk", 00:12:23.436 "block_size": 512, 00:12:23.436 "num_blocks": 65536, 00:12:23.436 "uuid": "5ecfa546-48bc-11ef-a06c-59ddad71024c", 00:12:23.436 "assigned_rate_limits": { 00:12:23.436 "rw_ios_per_sec": 0, 00:12:23.436 "rw_mbytes_per_sec": 0, 00:12:23.436 "r_mbytes_per_sec": 0, 00:12:23.436 "w_mbytes_per_sec": 0 00:12:23.436 }, 00:12:23.436 "claimed": true, 00:12:23.436 "claim_type": "exclusive_write", 00:12:23.436 "zoned": false, 00:12:23.436 "supported_io_types": { 00:12:23.436 "read": true, 00:12:23.436 "write": true, 00:12:23.436 "unmap": true, 00:12:23.436 "flush": true, 00:12:23.436 "reset": true, 00:12:23.436 "nvme_admin": false, 00:12:23.436 "nvme_io": false, 00:12:23.436 "nvme_io_md": false, 00:12:23.436 "write_zeroes": true, 00:12:23.436 "zcopy": true, 00:12:23.436 "get_zone_info": false, 00:12:23.436 "zone_management": false, 00:12:23.436 "zone_append": false, 00:12:23.436 "compare": false, 00:12:23.436 "compare_and_write": false, 00:12:23.436 "abort": true, 00:12:23.436 "seek_hole": false, 00:12:23.436 "seek_data": false, 00:12:23.436 "copy": true, 00:12:23.436 "nvme_iov_md": false 00:12:23.437 }, 00:12:23.437 "memory_domains": [ 00:12:23.437 { 00:12:23.437 "dma_device_id": "system", 00:12:23.437 "dma_device_type": 1 00:12:23.437 }, 00:12:23.437 { 00:12:23.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.437 "dma_device_type": 2 00:12:23.437 } 00:12:23.437 ], 00:12:23.437 "driver_specific": {} 00:12:23.437 } 00:12:23.437 ] 00:12:23.437 06:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:23.437 06:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
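The verify_raid_bdev_state helper seen throughout these traces fetches the array's info with bdev_raid_get_bdevs and a jq select; the individual field comparisons are not visible in this excerpt, so the checks below are an assumed reconstruction based on the arguments the helper receives and the JSON it prints (state, raid_level, strip_size_kb and the member counts).

```sh
# Assumed sketch of verify_raid_bdev_state for the call traced below
# ("configuring" concat array, strip size 64, 3 members operational).
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

[[ $(jq -r '.state'         <<< "$info") == configuring ]]
[[ $(jq -r '.raid_level'    <<< "$info") == concat ]]
[[ $(jq -r '.strip_size_kb' <<< "$info") == 64 ]]
[[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 3 ]]
```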
00:12:23.437 06:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:23.437 06:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:23.437 06:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:23.437 06:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:23.437 06:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:23.437 06:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:23.437 06:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:23.437 06:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:23.437 06:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:23.437 06:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.437 06:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.694 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:23.694 "name": "Existed_Raid", 00:12:23.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.694 "strip_size_kb": 64, 00:12:23.694 "state": "configuring", 00:12:23.694 "raid_level": "concat", 00:12:23.694 "superblock": false, 00:12:23.694 "num_base_bdevs": 3, 00:12:23.694 "num_base_bdevs_discovered": 1, 00:12:23.694 "num_base_bdevs_operational": 3, 00:12:23.694 "base_bdevs_list": [ 00:12:23.694 { 00:12:23.694 "name": "BaseBdev1", 00:12:23.694 "uuid": "5ecfa546-48bc-11ef-a06c-59ddad71024c", 00:12:23.694 "is_configured": true, 00:12:23.694 "data_offset": 0, 00:12:23.694 "data_size": 65536 00:12:23.694 }, 00:12:23.694 { 00:12:23.694 "name": "BaseBdev2", 00:12:23.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.694 "is_configured": false, 00:12:23.694 "data_offset": 0, 00:12:23.694 "data_size": 0 00:12:23.694 }, 00:12:23.694 { 00:12:23.694 "name": "BaseBdev3", 00:12:23.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.694 "is_configured": false, 00:12:23.694 "data_offset": 0, 00:12:23.694 "data_size": 0 00:12:23.694 } 00:12:23.694 ] 00:12:23.694 }' 00:12:23.695 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:23.695 06:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.952 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:24.212 [2024-07-23 06:25:36.681333] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:24.212 [2024-07-23 06:25:36.681368] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2825f1434500 name Existed_Raid, state configuring 00:12:24.212 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:24.470 [2024-07-23 06:25:36.921371] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 
is claimed 00:12:24.470 [2024-07-23 06:25:36.922271] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:24.470 [2024-07-23 06:25:36.922310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:24.470 [2024-07-23 06:25:36.922315] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:24.470 [2024-07-23 06:25:36.922324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:24.470 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:24.470 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:24.470 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:24.470 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:24.471 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:24.471 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:24.471 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:24.471 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:24.471 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:24.471 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:24.471 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:24.471 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:24.471 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.471 06:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.729 06:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:24.729 "name": "Existed_Raid", 00:12:24.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.729 "strip_size_kb": 64, 00:12:24.729 "state": "configuring", 00:12:24.729 "raid_level": "concat", 00:12:24.729 "superblock": false, 00:12:24.729 "num_base_bdevs": 3, 00:12:24.729 "num_base_bdevs_discovered": 1, 00:12:24.729 "num_base_bdevs_operational": 3, 00:12:24.729 "base_bdevs_list": [ 00:12:24.729 { 00:12:24.729 "name": "BaseBdev1", 00:12:24.729 "uuid": "5ecfa546-48bc-11ef-a06c-59ddad71024c", 00:12:24.729 "is_configured": true, 00:12:24.729 "data_offset": 0, 00:12:24.729 "data_size": 65536 00:12:24.729 }, 00:12:24.729 { 00:12:24.729 "name": "BaseBdev2", 00:12:24.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.729 "is_configured": false, 00:12:24.729 "data_offset": 0, 00:12:24.729 "data_size": 0 00:12:24.729 }, 00:12:24.729 { 00:12:24.729 "name": "BaseBdev3", 00:12:24.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.729 "is_configured": false, 00:12:24.729 "data_offset": 0, 00:12:24.729 "data_size": 0 00:12:24.729 } 00:12:24.729 ] 00:12:24.729 }' 00:12:24.729 06:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:24.729 06:25:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.294 06:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:25.294 [2024-07-23 06:25:37.801548] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.294 BaseBdev2 00:12:25.553 06:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:25.553 06:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:25.553 06:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:25.553 06:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:25.553 06:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:25.553 06:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:25.553 06:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:25.553 06:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:25.811 [ 00:12:25.811 { 00:12:25.811 "name": "BaseBdev2", 00:12:25.811 "aliases": [ 00:12:25.811 "6045ce7f-48bc-11ef-a06c-59ddad71024c" 00:12:25.811 ], 00:12:25.811 "product_name": "Malloc disk", 00:12:25.811 "block_size": 512, 00:12:25.811 "num_blocks": 65536, 00:12:25.811 "uuid": "6045ce7f-48bc-11ef-a06c-59ddad71024c", 00:12:25.811 "assigned_rate_limits": { 00:12:25.811 "rw_ios_per_sec": 0, 00:12:25.811 "rw_mbytes_per_sec": 0, 00:12:25.811 "r_mbytes_per_sec": 0, 00:12:25.811 "w_mbytes_per_sec": 0 00:12:25.811 }, 00:12:25.811 "claimed": true, 00:12:25.811 "claim_type": "exclusive_write", 00:12:25.811 "zoned": false, 00:12:25.811 "supported_io_types": { 00:12:25.811 "read": true, 00:12:25.811 "write": true, 00:12:25.811 "unmap": true, 00:12:25.811 "flush": true, 00:12:25.811 "reset": true, 00:12:25.811 "nvme_admin": false, 00:12:25.811 "nvme_io": false, 00:12:25.811 "nvme_io_md": false, 00:12:25.811 "write_zeroes": true, 00:12:25.811 "zcopy": true, 00:12:25.811 "get_zone_info": false, 00:12:25.811 "zone_management": false, 00:12:25.811 "zone_append": false, 00:12:25.811 "compare": false, 00:12:25.811 "compare_and_write": false, 00:12:25.811 "abort": true, 00:12:25.811 "seek_hole": false, 00:12:25.811 "seek_data": false, 00:12:25.811 "copy": true, 00:12:25.811 "nvme_iov_md": false 00:12:25.811 }, 00:12:25.811 "memory_domains": [ 00:12:25.811 { 00:12:25.811 "dma_device_id": "system", 00:12:25.811 "dma_device_type": 1 00:12:25.811 }, 00:12:25.811 { 00:12:25.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.811 "dma_device_type": 2 00:12:25.811 } 00:12:25.811 ], 00:12:25.811 "driver_specific": {} 00:12:25.811 } 00:12:25.811 ] 00:12:25.811 06:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:25.811 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:25.811 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:25.811 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:25.811 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:25.811 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:25.811 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:25.811 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:25.811 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:25.811 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:25.811 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:25.811 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:25.811 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:25.812 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.812 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.070 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:26.070 "name": "Existed_Raid", 00:12:26.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.070 "strip_size_kb": 64, 00:12:26.070 "state": "configuring", 00:12:26.070 "raid_level": "concat", 00:12:26.070 "superblock": false, 00:12:26.070 "num_base_bdevs": 3, 00:12:26.070 "num_base_bdevs_discovered": 2, 00:12:26.070 "num_base_bdevs_operational": 3, 00:12:26.070 "base_bdevs_list": [ 00:12:26.070 { 00:12:26.070 "name": "BaseBdev1", 00:12:26.070 "uuid": "5ecfa546-48bc-11ef-a06c-59ddad71024c", 00:12:26.070 "is_configured": true, 00:12:26.070 "data_offset": 0, 00:12:26.070 "data_size": 65536 00:12:26.070 }, 00:12:26.070 { 00:12:26.070 "name": "BaseBdev2", 00:12:26.070 "uuid": "6045ce7f-48bc-11ef-a06c-59ddad71024c", 00:12:26.070 "is_configured": true, 00:12:26.070 "data_offset": 0, 00:12:26.070 "data_size": 65536 00:12:26.070 }, 00:12:26.070 { 00:12:26.070 "name": "BaseBdev3", 00:12:26.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.071 "is_configured": false, 00:12:26.071 "data_offset": 0, 00:12:26.071 "data_size": 0 00:12:26.071 } 00:12:26.071 ] 00:12:26.071 }' 00:12:26.071 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:26.071 06:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.329 06:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:26.898 [2024-07-23 06:25:39.113544] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.898 [2024-07-23 06:25:39.113574] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2825f1434a00 00:12:26.898 [2024-07-23 06:25:39.113580] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:26.898 [2024-07-23 06:25:39.113603] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2825f1497e20 00:12:26.898 [2024-07-23 06:25:39.113699] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2825f1434a00 00:12:26.898 [2024-07-23 06:25:39.113714] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2825f1434a00 00:12:26.898 [2024-07-23 06:25:39.113764] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.898 BaseBdev3 00:12:26.898 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:26.898 06:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:26.898 06:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:26.898 06:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:26.898 06:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:26.898 06:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:26.898 06:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:27.261 06:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:27.261 [ 00:12:27.261 { 00:12:27.261 "name": "BaseBdev3", 00:12:27.261 "aliases": [ 00:12:27.261 "610e021a-48bc-11ef-a06c-59ddad71024c" 00:12:27.261 ], 00:12:27.261 "product_name": "Malloc disk", 00:12:27.261 "block_size": 512, 00:12:27.261 "num_blocks": 65536, 00:12:27.261 "uuid": "610e021a-48bc-11ef-a06c-59ddad71024c", 00:12:27.261 "assigned_rate_limits": { 00:12:27.261 "rw_ios_per_sec": 0, 00:12:27.261 "rw_mbytes_per_sec": 0, 00:12:27.262 "r_mbytes_per_sec": 0, 00:12:27.262 "w_mbytes_per_sec": 0 00:12:27.262 }, 00:12:27.262 "claimed": true, 00:12:27.262 "claim_type": "exclusive_write", 00:12:27.262 "zoned": false, 00:12:27.262 "supported_io_types": { 00:12:27.262 "read": true, 00:12:27.262 "write": true, 00:12:27.262 "unmap": true, 00:12:27.262 "flush": true, 00:12:27.262 "reset": true, 00:12:27.262 "nvme_admin": false, 00:12:27.262 "nvme_io": false, 00:12:27.262 "nvme_io_md": false, 00:12:27.262 "write_zeroes": true, 00:12:27.262 "zcopy": true, 00:12:27.262 "get_zone_info": false, 00:12:27.262 "zone_management": false, 00:12:27.262 "zone_append": false, 00:12:27.262 "compare": false, 00:12:27.262 "compare_and_write": false, 00:12:27.262 "abort": true, 00:12:27.262 "seek_hole": false, 00:12:27.262 "seek_data": false, 00:12:27.262 "copy": true, 00:12:27.262 "nvme_iov_md": false 00:12:27.262 }, 00:12:27.262 "memory_domains": [ 00:12:27.262 { 00:12:27.262 "dma_device_id": "system", 00:12:27.262 "dma_device_type": 1 00:12:27.262 }, 00:12:27.262 { 00:12:27.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.262 "dma_device_type": 2 00:12:27.262 } 00:12:27.262 ], 00:12:27.262 "driver_specific": {} 00:12:27.262 } 00:12:27.262 ] 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 
3 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.262 06:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.832 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:27.832 "name": "Existed_Raid", 00:12:27.832 "uuid": "610e0874-48bc-11ef-a06c-59ddad71024c", 00:12:27.832 "strip_size_kb": 64, 00:12:27.832 "state": "online", 00:12:27.832 "raid_level": "concat", 00:12:27.832 "superblock": false, 00:12:27.832 "num_base_bdevs": 3, 00:12:27.832 "num_base_bdevs_discovered": 3, 00:12:27.832 "num_base_bdevs_operational": 3, 00:12:27.832 "base_bdevs_list": [ 00:12:27.832 { 00:12:27.832 "name": "BaseBdev1", 00:12:27.832 "uuid": "5ecfa546-48bc-11ef-a06c-59ddad71024c", 00:12:27.832 "is_configured": true, 00:12:27.832 "data_offset": 0, 00:12:27.832 "data_size": 65536 00:12:27.832 }, 00:12:27.832 { 00:12:27.832 "name": "BaseBdev2", 00:12:27.832 "uuid": "6045ce7f-48bc-11ef-a06c-59ddad71024c", 00:12:27.832 "is_configured": true, 00:12:27.832 "data_offset": 0, 00:12:27.832 "data_size": 65536 00:12:27.832 }, 00:12:27.832 { 00:12:27.832 "name": "BaseBdev3", 00:12:27.832 "uuid": "610e021a-48bc-11ef-a06c-59ddad71024c", 00:12:27.832 "is_configured": true, 00:12:27.832 "data_offset": 0, 00:12:27.832 "data_size": 65536 00:12:27.832 } 00:12:27.832 ] 00:12:27.832 }' 00:12:27.832 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:27.832 06:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.090 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:28.090 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:28.090 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:28.090 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:28.090 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:28.090 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:28.090 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:28.090 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:28.349 [2024-07-23 06:25:40.625507] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.349 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:28.349 "name": "Existed_Raid", 00:12:28.349 "aliases": [ 00:12:28.349 "610e0874-48bc-11ef-a06c-59ddad71024c" 00:12:28.349 ], 00:12:28.349 "product_name": "Raid Volume", 00:12:28.349 "block_size": 512, 00:12:28.349 "num_blocks": 196608, 00:12:28.349 "uuid": "610e0874-48bc-11ef-a06c-59ddad71024c", 00:12:28.349 "assigned_rate_limits": { 00:12:28.349 "rw_ios_per_sec": 0, 00:12:28.349 "rw_mbytes_per_sec": 0, 00:12:28.349 "r_mbytes_per_sec": 0, 00:12:28.349 "w_mbytes_per_sec": 0 00:12:28.349 }, 00:12:28.349 "claimed": false, 00:12:28.349 "zoned": false, 00:12:28.349 "supported_io_types": { 00:12:28.349 "read": true, 00:12:28.349 "write": true, 00:12:28.349 "unmap": true, 00:12:28.349 "flush": true, 00:12:28.349 "reset": true, 00:12:28.349 "nvme_admin": false, 00:12:28.349 "nvme_io": false, 00:12:28.349 "nvme_io_md": false, 00:12:28.349 "write_zeroes": true, 00:12:28.349 "zcopy": false, 00:12:28.349 "get_zone_info": false, 00:12:28.349 "zone_management": false, 00:12:28.349 "zone_append": false, 00:12:28.349 "compare": false, 00:12:28.349 "compare_and_write": false, 00:12:28.349 "abort": false, 00:12:28.349 "seek_hole": false, 00:12:28.349 "seek_data": false, 00:12:28.349 "copy": false, 00:12:28.349 "nvme_iov_md": false 00:12:28.349 }, 00:12:28.349 "memory_domains": [ 00:12:28.349 { 00:12:28.349 "dma_device_id": "system", 00:12:28.349 "dma_device_type": 1 00:12:28.349 }, 00:12:28.349 { 00:12:28.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.349 "dma_device_type": 2 00:12:28.349 }, 00:12:28.349 { 00:12:28.349 "dma_device_id": "system", 00:12:28.349 "dma_device_type": 1 00:12:28.349 }, 00:12:28.349 { 00:12:28.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.349 "dma_device_type": 2 00:12:28.349 }, 00:12:28.349 { 00:12:28.349 "dma_device_id": "system", 00:12:28.349 "dma_device_type": 1 00:12:28.349 }, 00:12:28.349 { 00:12:28.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.349 "dma_device_type": 2 00:12:28.349 } 00:12:28.349 ], 00:12:28.349 "driver_specific": { 00:12:28.349 "raid": { 00:12:28.349 "uuid": "610e0874-48bc-11ef-a06c-59ddad71024c", 00:12:28.349 "strip_size_kb": 64, 00:12:28.349 "state": "online", 00:12:28.349 "raid_level": "concat", 00:12:28.349 "superblock": false, 00:12:28.349 "num_base_bdevs": 3, 00:12:28.349 "num_base_bdevs_discovered": 3, 00:12:28.349 "num_base_bdevs_operational": 3, 00:12:28.349 "base_bdevs_list": [ 00:12:28.349 { 00:12:28.349 "name": "BaseBdev1", 00:12:28.349 "uuid": "5ecfa546-48bc-11ef-a06c-59ddad71024c", 00:12:28.349 "is_configured": true, 00:12:28.349 "data_offset": 0, 00:12:28.349 "data_size": 65536 00:12:28.349 }, 00:12:28.349 { 00:12:28.349 "name": "BaseBdev2", 00:12:28.349 "uuid": "6045ce7f-48bc-11ef-a06c-59ddad71024c", 00:12:28.349 "is_configured": true, 00:12:28.349 "data_offset": 0, 00:12:28.349 "data_size": 65536 00:12:28.349 }, 00:12:28.349 { 00:12:28.349 "name": "BaseBdev3", 00:12:28.349 "uuid": "610e021a-48bc-11ef-a06c-59ddad71024c", 00:12:28.349 "is_configured": true, 00:12:28.349 "data_offset": 0, 00:12:28.349 "data_size": 65536 00:12:28.349 } 00:12:28.349 ] 00:12:28.349 } 00:12:28.349 } 00:12:28.349 }' 00:12:28.349 06:25:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:28.349 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:28.349 BaseBdev2 00:12:28.349 BaseBdev3' 00:12:28.349 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:28.349 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:28.349 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:28.609 "name": "BaseBdev1", 00:12:28.609 "aliases": [ 00:12:28.609 "5ecfa546-48bc-11ef-a06c-59ddad71024c" 00:12:28.609 ], 00:12:28.609 "product_name": "Malloc disk", 00:12:28.609 "block_size": 512, 00:12:28.609 "num_blocks": 65536, 00:12:28.609 "uuid": "5ecfa546-48bc-11ef-a06c-59ddad71024c", 00:12:28.609 "assigned_rate_limits": { 00:12:28.609 "rw_ios_per_sec": 0, 00:12:28.609 "rw_mbytes_per_sec": 0, 00:12:28.609 "r_mbytes_per_sec": 0, 00:12:28.609 "w_mbytes_per_sec": 0 00:12:28.609 }, 00:12:28.609 "claimed": true, 00:12:28.609 "claim_type": "exclusive_write", 00:12:28.609 "zoned": false, 00:12:28.609 "supported_io_types": { 00:12:28.609 "read": true, 00:12:28.609 "write": true, 00:12:28.609 "unmap": true, 00:12:28.609 "flush": true, 00:12:28.609 "reset": true, 00:12:28.609 "nvme_admin": false, 00:12:28.609 "nvme_io": false, 00:12:28.609 "nvme_io_md": false, 00:12:28.609 "write_zeroes": true, 00:12:28.609 "zcopy": true, 00:12:28.609 "get_zone_info": false, 00:12:28.609 "zone_management": false, 00:12:28.609 "zone_append": false, 00:12:28.609 "compare": false, 00:12:28.609 "compare_and_write": false, 00:12:28.609 "abort": true, 00:12:28.609 "seek_hole": false, 00:12:28.609 "seek_data": false, 00:12:28.609 "copy": true, 00:12:28.609 "nvme_iov_md": false 00:12:28.609 }, 00:12:28.609 "memory_domains": [ 00:12:28.609 { 00:12:28.609 "dma_device_id": "system", 00:12:28.609 "dma_device_type": 1 00:12:28.609 }, 00:12:28.609 { 00:12:28.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.609 "dma_device_type": 2 00:12:28.609 } 00:12:28.609 ], 00:12:28.609 "driver_specific": {} 00:12:28.609 }' 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:28.609 06:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:28.868 "name": "BaseBdev2", 00:12:28.868 "aliases": [ 00:12:28.868 "6045ce7f-48bc-11ef-a06c-59ddad71024c" 00:12:28.868 ], 00:12:28.868 "product_name": "Malloc disk", 00:12:28.868 "block_size": 512, 00:12:28.868 "num_blocks": 65536, 00:12:28.868 "uuid": "6045ce7f-48bc-11ef-a06c-59ddad71024c", 00:12:28.868 "assigned_rate_limits": { 00:12:28.868 "rw_ios_per_sec": 0, 00:12:28.868 "rw_mbytes_per_sec": 0, 00:12:28.868 "r_mbytes_per_sec": 0, 00:12:28.868 "w_mbytes_per_sec": 0 00:12:28.868 }, 00:12:28.868 "claimed": true, 00:12:28.868 "claim_type": "exclusive_write", 00:12:28.868 "zoned": false, 00:12:28.868 "supported_io_types": { 00:12:28.868 "read": true, 00:12:28.868 "write": true, 00:12:28.868 "unmap": true, 00:12:28.868 "flush": true, 00:12:28.868 "reset": true, 00:12:28.868 "nvme_admin": false, 00:12:28.868 "nvme_io": false, 00:12:28.868 "nvme_io_md": false, 00:12:28.868 "write_zeroes": true, 00:12:28.868 "zcopy": true, 00:12:28.868 "get_zone_info": false, 00:12:28.868 "zone_management": false, 00:12:28.868 "zone_append": false, 00:12:28.868 "compare": false, 00:12:28.868 "compare_and_write": false, 00:12:28.868 "abort": true, 00:12:28.868 "seek_hole": false, 00:12:28.868 "seek_data": false, 00:12:28.868 "copy": true, 00:12:28.868 "nvme_iov_md": false 00:12:28.868 }, 00:12:28.868 "memory_domains": [ 00:12:28.868 { 00:12:28.868 "dma_device_id": "system", 00:12:28.868 "dma_device_type": 1 00:12:28.868 }, 00:12:28.868 { 00:12:28.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.868 "dma_device_type": 2 00:12:28.868 } 00:12:28.868 ], 00:12:28.868 "driver_specific": {} 00:12:28.868 }' 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:28.868 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:29.127 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:29.127 "name": "BaseBdev3", 00:12:29.127 "aliases": [ 00:12:29.127 "610e021a-48bc-11ef-a06c-59ddad71024c" 00:12:29.127 ], 00:12:29.127 "product_name": "Malloc disk", 00:12:29.127 "block_size": 512, 00:12:29.127 "num_blocks": 65536, 00:12:29.127 "uuid": "610e021a-48bc-11ef-a06c-59ddad71024c", 00:12:29.127 "assigned_rate_limits": { 00:12:29.127 "rw_ios_per_sec": 0, 00:12:29.127 "rw_mbytes_per_sec": 0, 00:12:29.127 "r_mbytes_per_sec": 0, 00:12:29.127 "w_mbytes_per_sec": 0 00:12:29.127 }, 00:12:29.127 "claimed": true, 00:12:29.127 "claim_type": "exclusive_write", 00:12:29.127 "zoned": false, 00:12:29.127 "supported_io_types": { 00:12:29.127 "read": true, 00:12:29.127 "write": true, 00:12:29.127 "unmap": true, 00:12:29.127 "flush": true, 00:12:29.127 "reset": true, 00:12:29.127 "nvme_admin": false, 00:12:29.127 "nvme_io": false, 00:12:29.127 "nvme_io_md": false, 00:12:29.127 "write_zeroes": true, 00:12:29.127 "zcopy": true, 00:12:29.127 "get_zone_info": false, 00:12:29.127 "zone_management": false, 00:12:29.127 "zone_append": false, 00:12:29.127 "compare": false, 00:12:29.127 "compare_and_write": false, 00:12:29.127 "abort": true, 00:12:29.127 "seek_hole": false, 00:12:29.127 "seek_data": false, 00:12:29.127 "copy": true, 00:12:29.127 "nvme_iov_md": false 00:12:29.127 }, 00:12:29.127 "memory_domains": [ 00:12:29.127 { 00:12:29.127 "dma_device_id": "system", 00:12:29.127 "dma_device_type": 1 00:12:29.127 }, 00:12:29.127 { 00:12:29.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.127 "dma_device_type": 2 00:12:29.127 } 00:12:29.127 ], 00:12:29.127 "driver_specific": {} 00:12:29.127 }' 00:12:29.127 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:29.127 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:29.127 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:29.127 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:29.127 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:29.127 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:29.127 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:29.386 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:29.386 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:29.386 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:29.386 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:29.386 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:29.386 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:29.645 [2024-07-23 06:25:41.929574] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:12:29.645 [2024-07-23 06:25:41.929601] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.645 [2024-07-23 06:25:41.929631] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.645 06:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.904 06:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:29.904 "name": "Existed_Raid", 00:12:29.904 "uuid": "610e0874-48bc-11ef-a06c-59ddad71024c", 00:12:29.904 "strip_size_kb": 64, 00:12:29.904 "state": "offline", 00:12:29.904 "raid_level": "concat", 00:12:29.904 "superblock": false, 00:12:29.904 "num_base_bdevs": 3, 00:12:29.904 "num_base_bdevs_discovered": 2, 00:12:29.904 "num_base_bdevs_operational": 2, 00:12:29.904 "base_bdevs_list": [ 00:12:29.904 { 00:12:29.904 "name": null, 00:12:29.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.904 "is_configured": false, 00:12:29.904 "data_offset": 0, 00:12:29.904 "data_size": 65536 00:12:29.904 }, 00:12:29.904 { 00:12:29.904 "name": "BaseBdev2", 00:12:29.905 "uuid": "6045ce7f-48bc-11ef-a06c-59ddad71024c", 00:12:29.905 "is_configured": true, 00:12:29.905 "data_offset": 0, 00:12:29.905 "data_size": 65536 00:12:29.905 }, 00:12:29.905 { 00:12:29.905 "name": "BaseBdev3", 00:12:29.905 "uuid": "610e021a-48bc-11ef-a06c-59ddad71024c", 00:12:29.905 "is_configured": true, 00:12:29.905 "data_offset": 0, 00:12:29.905 "data_size": 65536 00:12:29.905 } 00:12:29.905 ] 00:12:29.905 }' 00:12:29.905 06:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:12:29.905 06:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.163 06:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:30.163 06:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:30.163 06:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.163 06:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:30.422 06:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:30.422 06:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:30.422 06:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:30.681 [2024-07-23 06:25:43.019898] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:30.681 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:30.681 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:30.681 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.681 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:30.942 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:30.942 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:30.942 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:31.202 [2024-07-23 06:25:43.517800] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:31.203 [2024-07-23 06:25:43.517838] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2825f1434a00 name Existed_Raid, state offline 00:12:31.203 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:31.203 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:31.203 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.203 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:31.468 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:31.468 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:31.468 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:31.468 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:31.468 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:31.468 06:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:31.726 
BaseBdev2 00:12:31.726 06:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:31.726 06:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:31.726 06:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:31.726 06:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:31.726 06:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:31.726 06:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:31.726 06:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:31.985 06:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:32.244 [ 00:12:32.244 { 00:12:32.244 "name": "BaseBdev2", 00:12:32.244 "aliases": [ 00:12:32.244 "64050746-48bc-11ef-a06c-59ddad71024c" 00:12:32.244 ], 00:12:32.244 "product_name": "Malloc disk", 00:12:32.244 "block_size": 512, 00:12:32.244 "num_blocks": 65536, 00:12:32.244 "uuid": "64050746-48bc-11ef-a06c-59ddad71024c", 00:12:32.244 "assigned_rate_limits": { 00:12:32.244 "rw_ios_per_sec": 0, 00:12:32.244 "rw_mbytes_per_sec": 0, 00:12:32.244 "r_mbytes_per_sec": 0, 00:12:32.244 "w_mbytes_per_sec": 0 00:12:32.244 }, 00:12:32.244 "claimed": false, 00:12:32.244 "zoned": false, 00:12:32.244 "supported_io_types": { 00:12:32.244 "read": true, 00:12:32.244 "write": true, 00:12:32.244 "unmap": true, 00:12:32.244 "flush": true, 00:12:32.244 "reset": true, 00:12:32.244 "nvme_admin": false, 00:12:32.244 "nvme_io": false, 00:12:32.244 "nvme_io_md": false, 00:12:32.244 "write_zeroes": true, 00:12:32.244 "zcopy": true, 00:12:32.244 "get_zone_info": false, 00:12:32.244 "zone_management": false, 00:12:32.244 "zone_append": false, 00:12:32.244 "compare": false, 00:12:32.244 "compare_and_write": false, 00:12:32.244 "abort": true, 00:12:32.244 "seek_hole": false, 00:12:32.244 "seek_data": false, 00:12:32.244 "copy": true, 00:12:32.244 "nvme_iov_md": false 00:12:32.244 }, 00:12:32.244 "memory_domains": [ 00:12:32.244 { 00:12:32.244 "dma_device_id": "system", 00:12:32.244 "dma_device_type": 1 00:12:32.244 }, 00:12:32.244 { 00:12:32.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.244 "dma_device_type": 2 00:12:32.244 } 00:12:32.244 ], 00:12:32.244 "driver_specific": {} 00:12:32.244 } 00:12:32.244 ] 00:12:32.244 06:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:32.244 06:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:32.244 06:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:32.244 06:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:32.503 BaseBdev3 00:12:32.503 06:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:32.503 06:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:32.503 06:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:12:32.503 06:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:32.503 06:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:32.503 06:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:32.503 06:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:32.761 06:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:33.019 [ 00:12:33.019 { 00:12:33.019 "name": "BaseBdev3", 00:12:33.019 "aliases": [ 00:12:33.019 "647fb63c-48bc-11ef-a06c-59ddad71024c" 00:12:33.019 ], 00:12:33.019 "product_name": "Malloc disk", 00:12:33.019 "block_size": 512, 00:12:33.019 "num_blocks": 65536, 00:12:33.019 "uuid": "647fb63c-48bc-11ef-a06c-59ddad71024c", 00:12:33.019 "assigned_rate_limits": { 00:12:33.019 "rw_ios_per_sec": 0, 00:12:33.019 "rw_mbytes_per_sec": 0, 00:12:33.019 "r_mbytes_per_sec": 0, 00:12:33.019 "w_mbytes_per_sec": 0 00:12:33.019 }, 00:12:33.019 "claimed": false, 00:12:33.019 "zoned": false, 00:12:33.019 "supported_io_types": { 00:12:33.019 "read": true, 00:12:33.019 "write": true, 00:12:33.019 "unmap": true, 00:12:33.019 "flush": true, 00:12:33.019 "reset": true, 00:12:33.019 "nvme_admin": false, 00:12:33.019 "nvme_io": false, 00:12:33.019 "nvme_io_md": false, 00:12:33.019 "write_zeroes": true, 00:12:33.019 "zcopy": true, 00:12:33.019 "get_zone_info": false, 00:12:33.019 "zone_management": false, 00:12:33.019 "zone_append": false, 00:12:33.019 "compare": false, 00:12:33.019 "compare_and_write": false, 00:12:33.019 "abort": true, 00:12:33.019 "seek_hole": false, 00:12:33.019 "seek_data": false, 00:12:33.019 "copy": true, 00:12:33.019 "nvme_iov_md": false 00:12:33.019 }, 00:12:33.019 "memory_domains": [ 00:12:33.019 { 00:12:33.019 "dma_device_id": "system", 00:12:33.019 "dma_device_type": 1 00:12:33.019 }, 00:12:33.019 { 00:12:33.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.019 "dma_device_type": 2 00:12:33.019 } 00:12:33.019 ], 00:12:33.019 "driver_specific": {} 00:12:33.019 } 00:12:33.019 ] 00:12:33.019 06:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:33.019 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:33.019 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:33.019 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:33.282 [2024-07-23 06:25:45.707778] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:33.282 [2024-07-23 06:25:45.707831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:33.282 [2024-07-23 06:25:45.707842] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.282 [2024-07-23 06:25:45.708406] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.282 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 
3 00:12:33.282 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:33.282 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:33.282 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:33.282 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:33.282 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:33.282 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:33.282 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:33.282 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:33.282 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:33.282 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.282 06:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.553 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:33.553 "name": "Existed_Raid", 00:12:33.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.553 "strip_size_kb": 64, 00:12:33.553 "state": "configuring", 00:12:33.553 "raid_level": "concat", 00:12:33.553 "superblock": false, 00:12:33.553 "num_base_bdevs": 3, 00:12:33.553 "num_base_bdevs_discovered": 2, 00:12:33.553 "num_base_bdevs_operational": 3, 00:12:33.553 "base_bdevs_list": [ 00:12:33.553 { 00:12:33.553 "name": "BaseBdev1", 00:12:33.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.553 "is_configured": false, 00:12:33.553 "data_offset": 0, 00:12:33.553 "data_size": 0 00:12:33.553 }, 00:12:33.553 { 00:12:33.553 "name": "BaseBdev2", 00:12:33.553 "uuid": "64050746-48bc-11ef-a06c-59ddad71024c", 00:12:33.553 "is_configured": true, 00:12:33.553 "data_offset": 0, 00:12:33.553 "data_size": 65536 00:12:33.553 }, 00:12:33.553 { 00:12:33.553 "name": "BaseBdev3", 00:12:33.553 "uuid": "647fb63c-48bc-11ef-a06c-59ddad71024c", 00:12:33.553 "is_configured": true, 00:12:33.553 "data_offset": 0, 00:12:33.553 "data_size": 65536 00:12:33.553 } 00:12:33.553 ] 00:12:33.553 }' 00:12:33.553 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:33.553 06:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.119 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:34.119 [2024-07-23 06:25:46.595803] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:34.119 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:34.119 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:34.119 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:34.119 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:34.119 
06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:34.119 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:34.119 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:34.119 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:34.119 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:34.119 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:34.119 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.119 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.377 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:34.377 "name": "Existed_Raid", 00:12:34.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.377 "strip_size_kb": 64, 00:12:34.377 "state": "configuring", 00:12:34.377 "raid_level": "concat", 00:12:34.377 "superblock": false, 00:12:34.377 "num_base_bdevs": 3, 00:12:34.377 "num_base_bdevs_discovered": 1, 00:12:34.377 "num_base_bdevs_operational": 3, 00:12:34.377 "base_bdevs_list": [ 00:12:34.377 { 00:12:34.377 "name": "BaseBdev1", 00:12:34.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.377 "is_configured": false, 00:12:34.377 "data_offset": 0, 00:12:34.377 "data_size": 0 00:12:34.377 }, 00:12:34.377 { 00:12:34.377 "name": null, 00:12:34.377 "uuid": "64050746-48bc-11ef-a06c-59ddad71024c", 00:12:34.377 "is_configured": false, 00:12:34.377 "data_offset": 0, 00:12:34.377 "data_size": 65536 00:12:34.377 }, 00:12:34.377 { 00:12:34.377 "name": "BaseBdev3", 00:12:34.377 "uuid": "647fb63c-48bc-11ef-a06c-59ddad71024c", 00:12:34.377 "is_configured": true, 00:12:34.377 "data_offset": 0, 00:12:34.377 "data_size": 65536 00:12:34.377 } 00:12:34.377 ] 00:12:34.377 }' 00:12:34.377 06:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:34.378 06:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.943 06:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.943 06:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:34.943 06:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:34.943 06:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:35.509 [2024-07-23 06:25:47.724054] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.509 BaseBdev1 00:12:35.509 06:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:35.509 06:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:35.509 06:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:35.510 06:25:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local i 00:12:35.510 06:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:35.510 06:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:35.510 06:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:35.768 06:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:36.026 [ 00:12:36.026 { 00:12:36.026 "name": "BaseBdev1", 00:12:36.026 "aliases": [ 00:12:36.026 "662fdbe2-48bc-11ef-a06c-59ddad71024c" 00:12:36.026 ], 00:12:36.026 "product_name": "Malloc disk", 00:12:36.026 "block_size": 512, 00:12:36.026 "num_blocks": 65536, 00:12:36.026 "uuid": "662fdbe2-48bc-11ef-a06c-59ddad71024c", 00:12:36.026 "assigned_rate_limits": { 00:12:36.026 "rw_ios_per_sec": 0, 00:12:36.026 "rw_mbytes_per_sec": 0, 00:12:36.026 "r_mbytes_per_sec": 0, 00:12:36.026 "w_mbytes_per_sec": 0 00:12:36.026 }, 00:12:36.026 "claimed": true, 00:12:36.026 "claim_type": "exclusive_write", 00:12:36.026 "zoned": false, 00:12:36.026 "supported_io_types": { 00:12:36.026 "read": true, 00:12:36.026 "write": true, 00:12:36.026 "unmap": true, 00:12:36.026 "flush": true, 00:12:36.026 "reset": true, 00:12:36.026 "nvme_admin": false, 00:12:36.026 "nvme_io": false, 00:12:36.026 "nvme_io_md": false, 00:12:36.026 "write_zeroes": true, 00:12:36.026 "zcopy": true, 00:12:36.026 "get_zone_info": false, 00:12:36.026 "zone_management": false, 00:12:36.026 "zone_append": false, 00:12:36.026 "compare": false, 00:12:36.026 "compare_and_write": false, 00:12:36.026 "abort": true, 00:12:36.026 "seek_hole": false, 00:12:36.026 "seek_data": false, 00:12:36.026 "copy": true, 00:12:36.026 "nvme_iov_md": false 00:12:36.026 }, 00:12:36.026 "memory_domains": [ 00:12:36.026 { 00:12:36.026 "dma_device_id": "system", 00:12:36.026 "dma_device_type": 1 00:12:36.026 }, 00:12:36.026 { 00:12:36.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.026 "dma_device_type": 2 00:12:36.026 } 00:12:36.026 ], 00:12:36.026 "driver_specific": {} 00:12:36.026 } 00:12:36.026 ] 00:12:36.026 06:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:36.026 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:36.026 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:36.026 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:36.026 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:36.026 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:36.026 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:36.026 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:36.026 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:36.026 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:36.026 06:25:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:12:36.026 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:36.026 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.284 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:36.284 "name": "Existed_Raid", 00:12:36.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.284 "strip_size_kb": 64, 00:12:36.284 "state": "configuring", 00:12:36.284 "raid_level": "concat", 00:12:36.284 "superblock": false, 00:12:36.284 "num_base_bdevs": 3, 00:12:36.284 "num_base_bdevs_discovered": 2, 00:12:36.284 "num_base_bdevs_operational": 3, 00:12:36.284 "base_bdevs_list": [ 00:12:36.284 { 00:12:36.284 "name": "BaseBdev1", 00:12:36.284 "uuid": "662fdbe2-48bc-11ef-a06c-59ddad71024c", 00:12:36.284 "is_configured": true, 00:12:36.284 "data_offset": 0, 00:12:36.284 "data_size": 65536 00:12:36.284 }, 00:12:36.284 { 00:12:36.284 "name": null, 00:12:36.284 "uuid": "64050746-48bc-11ef-a06c-59ddad71024c", 00:12:36.284 "is_configured": false, 00:12:36.284 "data_offset": 0, 00:12:36.284 "data_size": 65536 00:12:36.284 }, 00:12:36.284 { 00:12:36.284 "name": "BaseBdev3", 00:12:36.284 "uuid": "647fb63c-48bc-11ef-a06c-59ddad71024c", 00:12:36.284 "is_configured": true, 00:12:36.284 "data_offset": 0, 00:12:36.284 "data_size": 65536 00:12:36.284 } 00:12:36.284 ] 00:12:36.284 }' 00:12:36.284 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:36.284 06:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.550 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:36.550 06:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:36.808 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:36.808 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:37.066 [2024-07-23 06:25:49.411884] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:37.066 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:37.066 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:37.066 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:37.066 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:37.066 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:37.066 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:37.066 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:37.066 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:37.066 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:12:37.066 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:37.066 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.066 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.324 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:37.324 "name": "Existed_Raid", 00:12:37.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.324 "strip_size_kb": 64, 00:12:37.324 "state": "configuring", 00:12:37.324 "raid_level": "concat", 00:12:37.324 "superblock": false, 00:12:37.324 "num_base_bdevs": 3, 00:12:37.324 "num_base_bdevs_discovered": 1, 00:12:37.324 "num_base_bdevs_operational": 3, 00:12:37.324 "base_bdevs_list": [ 00:12:37.324 { 00:12:37.324 "name": "BaseBdev1", 00:12:37.324 "uuid": "662fdbe2-48bc-11ef-a06c-59ddad71024c", 00:12:37.324 "is_configured": true, 00:12:37.324 "data_offset": 0, 00:12:37.324 "data_size": 65536 00:12:37.324 }, 00:12:37.324 { 00:12:37.324 "name": null, 00:12:37.325 "uuid": "64050746-48bc-11ef-a06c-59ddad71024c", 00:12:37.325 "is_configured": false, 00:12:37.325 "data_offset": 0, 00:12:37.325 "data_size": 65536 00:12:37.325 }, 00:12:37.325 { 00:12:37.325 "name": null, 00:12:37.325 "uuid": "647fb63c-48bc-11ef-a06c-59ddad71024c", 00:12:37.325 "is_configured": false, 00:12:37.325 "data_offset": 0, 00:12:37.325 "data_size": 65536 00:12:37.325 } 00:12:37.325 ] 00:12:37.325 }' 00:12:37.325 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:37.325 06:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.583 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.583 06:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:37.842 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:37.842 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:38.101 [2024-07-23 06:25:50.507932] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.101 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:38.101 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:38.101 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:38.101 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:38.101 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:38.101 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:38.101 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:38.101 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:38.101 06:25:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:38.101 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:38.101 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.101 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.359 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:38.359 "name": "Existed_Raid", 00:12:38.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.359 "strip_size_kb": 64, 00:12:38.359 "state": "configuring", 00:12:38.359 "raid_level": "concat", 00:12:38.359 "superblock": false, 00:12:38.359 "num_base_bdevs": 3, 00:12:38.359 "num_base_bdevs_discovered": 2, 00:12:38.359 "num_base_bdevs_operational": 3, 00:12:38.359 "base_bdevs_list": [ 00:12:38.359 { 00:12:38.359 "name": "BaseBdev1", 00:12:38.359 "uuid": "662fdbe2-48bc-11ef-a06c-59ddad71024c", 00:12:38.359 "is_configured": true, 00:12:38.359 "data_offset": 0, 00:12:38.359 "data_size": 65536 00:12:38.359 }, 00:12:38.359 { 00:12:38.359 "name": null, 00:12:38.359 "uuid": "64050746-48bc-11ef-a06c-59ddad71024c", 00:12:38.359 "is_configured": false, 00:12:38.359 "data_offset": 0, 00:12:38.359 "data_size": 65536 00:12:38.359 }, 00:12:38.359 { 00:12:38.359 "name": "BaseBdev3", 00:12:38.359 "uuid": "647fb63c-48bc-11ef-a06c-59ddad71024c", 00:12:38.359 "is_configured": true, 00:12:38.359 "data_offset": 0, 00:12:38.359 "data_size": 65536 00:12:38.359 } 00:12:38.359 ] 00:12:38.359 }' 00:12:38.359 06:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:38.359 06:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.618 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.618 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:38.876 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:38.876 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:39.134 [2024-07-23 06:25:51.599968] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:39.134 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:39.134 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:39.134 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:39.134 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:39.134 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:39.134 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:39.134 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:39.134 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:12:39.134 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:39.134 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:39.134 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.134 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.393 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:39.393 "name": "Existed_Raid", 00:12:39.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.393 "strip_size_kb": 64, 00:12:39.393 "state": "configuring", 00:12:39.393 "raid_level": "concat", 00:12:39.393 "superblock": false, 00:12:39.393 "num_base_bdevs": 3, 00:12:39.393 "num_base_bdevs_discovered": 1, 00:12:39.393 "num_base_bdevs_operational": 3, 00:12:39.393 "base_bdevs_list": [ 00:12:39.393 { 00:12:39.393 "name": null, 00:12:39.393 "uuid": "662fdbe2-48bc-11ef-a06c-59ddad71024c", 00:12:39.393 "is_configured": false, 00:12:39.393 "data_offset": 0, 00:12:39.393 "data_size": 65536 00:12:39.393 }, 00:12:39.393 { 00:12:39.393 "name": null, 00:12:39.393 "uuid": "64050746-48bc-11ef-a06c-59ddad71024c", 00:12:39.393 "is_configured": false, 00:12:39.393 "data_offset": 0, 00:12:39.393 "data_size": 65536 00:12:39.393 }, 00:12:39.393 { 00:12:39.393 "name": "BaseBdev3", 00:12:39.393 "uuid": "647fb63c-48bc-11ef-a06c-59ddad71024c", 00:12:39.393 "is_configured": true, 00:12:39.393 "data_offset": 0, 00:12:39.393 "data_size": 65536 00:12:39.393 } 00:12:39.393 ] 00:12:39.393 }' 00:12:39.393 06:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:39.393 06:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.959 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.959 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:40.217 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:40.217 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:40.475 [2024-07-23 06:25:52.830591] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.475 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:40.475 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:40.475 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:40.475 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:40.475 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:40.475 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:40.475 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:12:40.475 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:40.475 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:40.475 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:40.475 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.476 06:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.734 06:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:40.734 "name": "Existed_Raid", 00:12:40.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.734 "strip_size_kb": 64, 00:12:40.734 "state": "configuring", 00:12:40.734 "raid_level": "concat", 00:12:40.734 "superblock": false, 00:12:40.734 "num_base_bdevs": 3, 00:12:40.734 "num_base_bdevs_discovered": 2, 00:12:40.734 "num_base_bdevs_operational": 3, 00:12:40.734 "base_bdevs_list": [ 00:12:40.734 { 00:12:40.734 "name": null, 00:12:40.734 "uuid": "662fdbe2-48bc-11ef-a06c-59ddad71024c", 00:12:40.734 "is_configured": false, 00:12:40.734 "data_offset": 0, 00:12:40.734 "data_size": 65536 00:12:40.734 }, 00:12:40.734 { 00:12:40.734 "name": "BaseBdev2", 00:12:40.734 "uuid": "64050746-48bc-11ef-a06c-59ddad71024c", 00:12:40.734 "is_configured": true, 00:12:40.734 "data_offset": 0, 00:12:40.734 "data_size": 65536 00:12:40.734 }, 00:12:40.734 { 00:12:40.734 "name": "BaseBdev3", 00:12:40.734 "uuid": "647fb63c-48bc-11ef-a06c-59ddad71024c", 00:12:40.734 "is_configured": true, 00:12:40.734 "data_offset": 0, 00:12:40.734 "data_size": 65536 00:12:40.734 } 00:12:40.734 ] 00:12:40.734 }' 00:12:40.734 06:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:40.734 06:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.299 06:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.299 06:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:41.299 06:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:41.557 06:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:41.557 06:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.823 06:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 662fdbe2-48bc-11ef-a06c-59ddad71024c 00:12:42.111 [2024-07-23 06:25:54.402924] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:42.111 [2024-07-23 06:25:54.402955] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2825f1434a00 00:12:42.111 [2024-07-23 06:25:54.402960] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:42.111 [2024-07-23 06:25:54.402984] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2825f1497e20 00:12:42.111 [2024-07-23 
06:25:54.403055] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2825f1434a00 00:12:42.111 [2024-07-23 06:25:54.403060] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2825f1434a00 00:12:42.111 [2024-07-23 06:25:54.403099] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.111 NewBaseBdev 00:12:42.111 06:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:42.111 06:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:42.111 06:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:42.111 06:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:42.111 06:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:42.111 06:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:42.111 06:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:42.375 06:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:42.375 [ 00:12:42.375 { 00:12:42.375 "name": "NewBaseBdev", 00:12:42.375 "aliases": [ 00:12:42.375 "662fdbe2-48bc-11ef-a06c-59ddad71024c" 00:12:42.375 ], 00:12:42.375 "product_name": "Malloc disk", 00:12:42.375 "block_size": 512, 00:12:42.375 "num_blocks": 65536, 00:12:42.375 "uuid": "662fdbe2-48bc-11ef-a06c-59ddad71024c", 00:12:42.375 "assigned_rate_limits": { 00:12:42.375 "rw_ios_per_sec": 0, 00:12:42.375 "rw_mbytes_per_sec": 0, 00:12:42.375 "r_mbytes_per_sec": 0, 00:12:42.375 "w_mbytes_per_sec": 0 00:12:42.375 }, 00:12:42.375 "claimed": true, 00:12:42.375 "claim_type": "exclusive_write", 00:12:42.375 "zoned": false, 00:12:42.375 "supported_io_types": { 00:12:42.375 "read": true, 00:12:42.375 "write": true, 00:12:42.375 "unmap": true, 00:12:42.375 "flush": true, 00:12:42.375 "reset": true, 00:12:42.375 "nvme_admin": false, 00:12:42.375 "nvme_io": false, 00:12:42.375 "nvme_io_md": false, 00:12:42.375 "write_zeroes": true, 00:12:42.375 "zcopy": true, 00:12:42.375 "get_zone_info": false, 00:12:42.375 "zone_management": false, 00:12:42.375 "zone_append": false, 00:12:42.375 "compare": false, 00:12:42.375 "compare_and_write": false, 00:12:42.375 "abort": true, 00:12:42.375 "seek_hole": false, 00:12:42.375 "seek_data": false, 00:12:42.375 "copy": true, 00:12:42.375 "nvme_iov_md": false 00:12:42.375 }, 00:12:42.375 "memory_domains": [ 00:12:42.375 { 00:12:42.375 "dma_device_id": "system", 00:12:42.375 "dma_device_type": 1 00:12:42.375 }, 00:12:42.375 { 00:12:42.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.375 "dma_device_type": 2 00:12:42.375 } 00:12:42.375 ], 00:12:42.375 "driver_specific": {} 00:12:42.375 } 00:12:42.375 ] 00:12:42.633 06:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:42.633 06:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:42.633 06:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:42.633 06:25:54 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:42.633 06:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:42.633 06:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:42.633 06:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:42.633 06:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:42.633 06:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:42.633 06:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:42.634 06:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:42.634 06:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.634 06:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.634 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:42.634 "name": "Existed_Raid", 00:12:42.634 "uuid": "6a2b01dd-48bc-11ef-a06c-59ddad71024c", 00:12:42.634 "strip_size_kb": 64, 00:12:42.634 "state": "online", 00:12:42.634 "raid_level": "concat", 00:12:42.634 "superblock": false, 00:12:42.634 "num_base_bdevs": 3, 00:12:42.634 "num_base_bdevs_discovered": 3, 00:12:42.634 "num_base_bdevs_operational": 3, 00:12:42.634 "base_bdevs_list": [ 00:12:42.634 { 00:12:42.634 "name": "NewBaseBdev", 00:12:42.634 "uuid": "662fdbe2-48bc-11ef-a06c-59ddad71024c", 00:12:42.634 "is_configured": true, 00:12:42.634 "data_offset": 0, 00:12:42.634 "data_size": 65536 00:12:42.634 }, 00:12:42.634 { 00:12:42.634 "name": "BaseBdev2", 00:12:42.634 "uuid": "64050746-48bc-11ef-a06c-59ddad71024c", 00:12:42.634 "is_configured": true, 00:12:42.634 "data_offset": 0, 00:12:42.634 "data_size": 65536 00:12:42.634 }, 00:12:42.634 { 00:12:42.634 "name": "BaseBdev3", 00:12:42.634 "uuid": "647fb63c-48bc-11ef-a06c-59ddad71024c", 00:12:42.634 "is_configured": true, 00:12:42.634 "data_offset": 0, 00:12:42.634 "data_size": 65536 00:12:42.634 } 00:12:42.634 ] 00:12:42.634 }' 00:12:42.634 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:42.634 06:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.200 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:43.200 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:43.200 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:43.200 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:43.200 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:43.200 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:43.200 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:43.200 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:43.458 [2024-07-23 06:25:55.722962] 
bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.458 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:43.458 "name": "Existed_Raid", 00:12:43.458 "aliases": [ 00:12:43.458 "6a2b01dd-48bc-11ef-a06c-59ddad71024c" 00:12:43.458 ], 00:12:43.458 "product_name": "Raid Volume", 00:12:43.458 "block_size": 512, 00:12:43.458 "num_blocks": 196608, 00:12:43.458 "uuid": "6a2b01dd-48bc-11ef-a06c-59ddad71024c", 00:12:43.458 "assigned_rate_limits": { 00:12:43.458 "rw_ios_per_sec": 0, 00:12:43.458 "rw_mbytes_per_sec": 0, 00:12:43.458 "r_mbytes_per_sec": 0, 00:12:43.458 "w_mbytes_per_sec": 0 00:12:43.458 }, 00:12:43.458 "claimed": false, 00:12:43.458 "zoned": false, 00:12:43.458 "supported_io_types": { 00:12:43.458 "read": true, 00:12:43.458 "write": true, 00:12:43.458 "unmap": true, 00:12:43.458 "flush": true, 00:12:43.458 "reset": true, 00:12:43.458 "nvme_admin": false, 00:12:43.458 "nvme_io": false, 00:12:43.458 "nvme_io_md": false, 00:12:43.458 "write_zeroes": true, 00:12:43.458 "zcopy": false, 00:12:43.458 "get_zone_info": false, 00:12:43.458 "zone_management": false, 00:12:43.458 "zone_append": false, 00:12:43.458 "compare": false, 00:12:43.458 "compare_and_write": false, 00:12:43.458 "abort": false, 00:12:43.458 "seek_hole": false, 00:12:43.458 "seek_data": false, 00:12:43.458 "copy": false, 00:12:43.458 "nvme_iov_md": false 00:12:43.458 }, 00:12:43.458 "memory_domains": [ 00:12:43.458 { 00:12:43.458 "dma_device_id": "system", 00:12:43.458 "dma_device_type": 1 00:12:43.458 }, 00:12:43.458 { 00:12:43.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.458 "dma_device_type": 2 00:12:43.458 }, 00:12:43.458 { 00:12:43.458 "dma_device_id": "system", 00:12:43.458 "dma_device_type": 1 00:12:43.458 }, 00:12:43.458 { 00:12:43.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.458 "dma_device_type": 2 00:12:43.458 }, 00:12:43.458 { 00:12:43.458 "dma_device_id": "system", 00:12:43.458 "dma_device_type": 1 00:12:43.458 }, 00:12:43.458 { 00:12:43.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.458 "dma_device_type": 2 00:12:43.458 } 00:12:43.458 ], 00:12:43.458 "driver_specific": { 00:12:43.458 "raid": { 00:12:43.458 "uuid": "6a2b01dd-48bc-11ef-a06c-59ddad71024c", 00:12:43.458 "strip_size_kb": 64, 00:12:43.458 "state": "online", 00:12:43.458 "raid_level": "concat", 00:12:43.458 "superblock": false, 00:12:43.458 "num_base_bdevs": 3, 00:12:43.458 "num_base_bdevs_discovered": 3, 00:12:43.458 "num_base_bdevs_operational": 3, 00:12:43.458 "base_bdevs_list": [ 00:12:43.458 { 00:12:43.458 "name": "NewBaseBdev", 00:12:43.458 "uuid": "662fdbe2-48bc-11ef-a06c-59ddad71024c", 00:12:43.458 "is_configured": true, 00:12:43.458 "data_offset": 0, 00:12:43.458 "data_size": 65536 00:12:43.458 }, 00:12:43.458 { 00:12:43.458 "name": "BaseBdev2", 00:12:43.458 "uuid": "64050746-48bc-11ef-a06c-59ddad71024c", 00:12:43.458 "is_configured": true, 00:12:43.458 "data_offset": 0, 00:12:43.458 "data_size": 65536 00:12:43.458 }, 00:12:43.458 { 00:12:43.458 "name": "BaseBdev3", 00:12:43.458 "uuid": "647fb63c-48bc-11ef-a06c-59ddad71024c", 00:12:43.458 "is_configured": true, 00:12:43.458 "data_offset": 0, 00:12:43.458 "data_size": 65536 00:12:43.458 } 00:12:43.458 ] 00:12:43.458 } 00:12:43.458 } 00:12:43.458 }' 00:12:43.458 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:43.458 06:25:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:43.458 BaseBdev2 00:12:43.458 BaseBdev3' 00:12:43.459 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:43.459 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:43.459 06:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:43.717 "name": "NewBaseBdev", 00:12:43.717 "aliases": [ 00:12:43.717 "662fdbe2-48bc-11ef-a06c-59ddad71024c" 00:12:43.717 ], 00:12:43.717 "product_name": "Malloc disk", 00:12:43.717 "block_size": 512, 00:12:43.717 "num_blocks": 65536, 00:12:43.717 "uuid": "662fdbe2-48bc-11ef-a06c-59ddad71024c", 00:12:43.717 "assigned_rate_limits": { 00:12:43.717 "rw_ios_per_sec": 0, 00:12:43.717 "rw_mbytes_per_sec": 0, 00:12:43.717 "r_mbytes_per_sec": 0, 00:12:43.717 "w_mbytes_per_sec": 0 00:12:43.717 }, 00:12:43.717 "claimed": true, 00:12:43.717 "claim_type": "exclusive_write", 00:12:43.717 "zoned": false, 00:12:43.717 "supported_io_types": { 00:12:43.717 "read": true, 00:12:43.717 "write": true, 00:12:43.717 "unmap": true, 00:12:43.717 "flush": true, 00:12:43.717 "reset": true, 00:12:43.717 "nvme_admin": false, 00:12:43.717 "nvme_io": false, 00:12:43.717 "nvme_io_md": false, 00:12:43.717 "write_zeroes": true, 00:12:43.717 "zcopy": true, 00:12:43.717 "get_zone_info": false, 00:12:43.717 "zone_management": false, 00:12:43.717 "zone_append": false, 00:12:43.717 "compare": false, 00:12:43.717 "compare_and_write": false, 00:12:43.717 "abort": true, 00:12:43.717 "seek_hole": false, 00:12:43.717 "seek_data": false, 00:12:43.717 "copy": true, 00:12:43.717 "nvme_iov_md": false 00:12:43.717 }, 00:12:43.717 "memory_domains": [ 00:12:43.717 { 00:12:43.717 "dma_device_id": "system", 00:12:43.717 "dma_device_type": 1 00:12:43.717 }, 00:12:43.717 { 00:12:43.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.717 "dma_device_type": 2 00:12:43.717 } 00:12:43.717 ], 00:12:43.717 "driver_specific": {} 00:12:43.717 }' 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for 
name in $base_bdev_names 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:43.717 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:43.976 "name": "BaseBdev2", 00:12:43.976 "aliases": [ 00:12:43.976 "64050746-48bc-11ef-a06c-59ddad71024c" 00:12:43.976 ], 00:12:43.976 "product_name": "Malloc disk", 00:12:43.976 "block_size": 512, 00:12:43.976 "num_blocks": 65536, 00:12:43.976 "uuid": "64050746-48bc-11ef-a06c-59ddad71024c", 00:12:43.976 "assigned_rate_limits": { 00:12:43.976 "rw_ios_per_sec": 0, 00:12:43.976 "rw_mbytes_per_sec": 0, 00:12:43.976 "r_mbytes_per_sec": 0, 00:12:43.976 "w_mbytes_per_sec": 0 00:12:43.976 }, 00:12:43.976 "claimed": true, 00:12:43.976 "claim_type": "exclusive_write", 00:12:43.976 "zoned": false, 00:12:43.976 "supported_io_types": { 00:12:43.976 "read": true, 00:12:43.976 "write": true, 00:12:43.976 "unmap": true, 00:12:43.976 "flush": true, 00:12:43.976 "reset": true, 00:12:43.976 "nvme_admin": false, 00:12:43.976 "nvme_io": false, 00:12:43.976 "nvme_io_md": false, 00:12:43.976 "write_zeroes": true, 00:12:43.976 "zcopy": true, 00:12:43.976 "get_zone_info": false, 00:12:43.976 "zone_management": false, 00:12:43.976 "zone_append": false, 00:12:43.976 "compare": false, 00:12:43.976 "compare_and_write": false, 00:12:43.976 "abort": true, 00:12:43.976 "seek_hole": false, 00:12:43.976 "seek_data": false, 00:12:43.976 "copy": true, 00:12:43.976 "nvme_iov_md": false 00:12:43.976 }, 00:12:43.976 "memory_domains": [ 00:12:43.976 { 00:12:43.976 "dma_device_id": "system", 00:12:43.976 "dma_device_type": 1 00:12:43.976 }, 00:12:43.976 { 00:12:43.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.976 "dma_device_type": 2 00:12:43.976 } 00:12:43.976 ], 00:12:43.976 "driver_specific": {} 00:12:43.976 }' 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 00:12:43.976 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:44.234 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:44.234 "name": "BaseBdev3", 00:12:44.234 "aliases": [ 00:12:44.234 "647fb63c-48bc-11ef-a06c-59ddad71024c" 00:12:44.234 ], 00:12:44.234 "product_name": "Malloc disk", 00:12:44.234 "block_size": 512, 00:12:44.234 "num_blocks": 65536, 00:12:44.234 "uuid": "647fb63c-48bc-11ef-a06c-59ddad71024c", 00:12:44.234 "assigned_rate_limits": { 00:12:44.234 "rw_ios_per_sec": 0, 00:12:44.234 "rw_mbytes_per_sec": 0, 00:12:44.234 "r_mbytes_per_sec": 0, 00:12:44.234 "w_mbytes_per_sec": 0 00:12:44.234 }, 00:12:44.234 "claimed": true, 00:12:44.234 "claim_type": "exclusive_write", 00:12:44.234 "zoned": false, 00:12:44.234 "supported_io_types": { 00:12:44.234 "read": true, 00:12:44.234 "write": true, 00:12:44.234 "unmap": true, 00:12:44.234 "flush": true, 00:12:44.234 "reset": true, 00:12:44.234 "nvme_admin": false, 00:12:44.234 "nvme_io": false, 00:12:44.234 "nvme_io_md": false, 00:12:44.234 "write_zeroes": true, 00:12:44.234 "zcopy": true, 00:12:44.234 "get_zone_info": false, 00:12:44.234 "zone_management": false, 00:12:44.234 "zone_append": false, 00:12:44.234 "compare": false, 00:12:44.234 "compare_and_write": false, 00:12:44.234 "abort": true, 00:12:44.234 "seek_hole": false, 00:12:44.234 "seek_data": false, 00:12:44.234 "copy": true, 00:12:44.234 "nvme_iov_md": false 00:12:44.234 }, 00:12:44.234 "memory_domains": [ 00:12:44.234 { 00:12:44.234 "dma_device_id": "system", 00:12:44.234 "dma_device_type": 1 00:12:44.234 }, 00:12:44.234 { 00:12:44.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.234 "dma_device_type": 2 00:12:44.235 } 00:12:44.235 ], 00:12:44.235 "driver_specific": {} 00:12:44.235 }' 00:12:44.235 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.493 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.493 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:44.493 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.493 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.493 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:44.493 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.493 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.493 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:44.493 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.493 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.493 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:44.493 06:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:44.751 [2024-07-23 06:25:57.071019] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.751 [2024-07-23 06:25:57.071046] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.751 [2024-07-23 06:25:57.071071] 
bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.751 [2024-07-23 06:25:57.071085] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.751 [2024-07-23 06:25:57.071090] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2825f1434a00 name Existed_Raid, state offline 00:12:44.751 06:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 54068 00:12:44.751 06:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 54068 ']' 00:12:44.751 06:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 54068 00:12:44.751 06:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:12:44.751 06:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:44.751 06:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:12:44.751 06:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 54068 00:12:44.751 06:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:12:44.751 06:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:12:44.751 06:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 54068' 00:12:44.751 killing process with pid 54068 00:12:44.751 06:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 54068 00:12:44.751 [2024-07-23 06:25:57.098981] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.751 06:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 54068 00:12:44.751 [2024-07-23 06:25:57.117438] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:45.009 00:12:45.009 real 0m24.588s 00:12:45.009 user 0m44.832s 00:12:45.009 sys 0m3.528s 00:12:45.009 ************************************ 00:12:45.009 END TEST raid_state_function_test 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.009 ************************************ 00:12:45.009 06:25:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:45.009 06:25:57 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:12:45.009 06:25:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:45.009 06:25:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.009 06:25:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.009 ************************************ 00:12:45.009 START TEST raid_state_function_test_sb 00:12:45.009 ************************************ 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:45.009 06:25:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=54797 00:12:45.009 Process raid pid: 54797 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 54797' 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 54797 /var/tmp/spdk-raid.sock 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 54797 ']' 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:12:45.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:45.009 06:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.009 [2024-07-23 06:25:57.377005] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:45.009 [2024-07-23 06:25:57.377280] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:45.575 EAL: TSC is not safe to use in SMP mode 00:12:45.575 EAL: TSC is not invariant 00:12:45.575 [2024-07-23 06:25:57.944054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.575 [2024-07-23 06:25:58.037290] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:45.575 [2024-07-23 06:25:58.039443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.575 [2024-07-23 06:25:58.040277] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.575 [2024-07-23 06:25:58.040293] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.142 06:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.142 06:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:12:46.142 06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:46.400 [2024-07-23 06:25:58.696367] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.400 [2024-07-23 06:25:58.696432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.400 [2024-07-23 06:25:58.696438] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:46.400 [2024-07-23 06:25:58.696447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:46.400 [2024-07-23 06:25:58.696451] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:46.400 [2024-07-23 06:25:58.696458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:46.400 06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:46.400 06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:46.400 06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:46.400 06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:46.400 06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:46.400 06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:46.400 
06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:46.400 06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:46.400 06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:46.400 06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:46.400 06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.400 06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.658 06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:46.658 "name": "Existed_Raid", 00:12:46.658 "uuid": "6cba2085-48bc-11ef-a06c-59ddad71024c", 00:12:46.658 "strip_size_kb": 64, 00:12:46.658 "state": "configuring", 00:12:46.658 "raid_level": "concat", 00:12:46.658 "superblock": true, 00:12:46.658 "num_base_bdevs": 3, 00:12:46.658 "num_base_bdevs_discovered": 0, 00:12:46.658 "num_base_bdevs_operational": 3, 00:12:46.658 "base_bdevs_list": [ 00:12:46.658 { 00:12:46.658 "name": "BaseBdev1", 00:12:46.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.658 "is_configured": false, 00:12:46.658 "data_offset": 0, 00:12:46.658 "data_size": 0 00:12:46.658 }, 00:12:46.658 { 00:12:46.658 "name": "BaseBdev2", 00:12:46.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.658 "is_configured": false, 00:12:46.658 "data_offset": 0, 00:12:46.658 "data_size": 0 00:12:46.658 }, 00:12:46.658 { 00:12:46.658 "name": "BaseBdev3", 00:12:46.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.658 "is_configured": false, 00:12:46.658 "data_offset": 0, 00:12:46.658 "data_size": 0 00:12:46.658 } 00:12:46.658 ] 00:12:46.658 }' 00:12:46.658 06:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:46.658 06:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.915 06:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:47.481 [2024-07-23 06:25:59.704368] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:47.481 [2024-07-23 06:25:59.704397] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c3abe234500 name Existed_Raid, state configuring 00:12:47.481 06:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:47.481 [2024-07-23 06:25:59.992460] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.481 [2024-07-23 06:25:59.992549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.481 [2024-07-23 06:25:59.992570] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:47.481 [2024-07-23 06:25:59.992579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:47.481 [2024-07-23 06:25:59.992582] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:47.481 
[2024-07-23 06:25:59.992589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:47.739 06:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:47.739 [2024-07-23 06:26:00.245502] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.739 BaseBdev1 00:12:47.997 06:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:47.997 06:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:47.997 06:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:47.997 06:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:47.997 06:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:47.997 06:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:47.997 06:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:48.255 06:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:48.513 [ 00:12:48.513 { 00:12:48.513 "name": "BaseBdev1", 00:12:48.513 "aliases": [ 00:12:48.513 "6da65ad8-48bc-11ef-a06c-59ddad71024c" 00:12:48.513 ], 00:12:48.513 "product_name": "Malloc disk", 00:12:48.513 "block_size": 512, 00:12:48.513 "num_blocks": 65536, 00:12:48.513 "uuid": "6da65ad8-48bc-11ef-a06c-59ddad71024c", 00:12:48.513 "assigned_rate_limits": { 00:12:48.513 "rw_ios_per_sec": 0, 00:12:48.513 "rw_mbytes_per_sec": 0, 00:12:48.513 "r_mbytes_per_sec": 0, 00:12:48.513 "w_mbytes_per_sec": 0 00:12:48.513 }, 00:12:48.513 "claimed": true, 00:12:48.513 "claim_type": "exclusive_write", 00:12:48.513 "zoned": false, 00:12:48.513 "supported_io_types": { 00:12:48.513 "read": true, 00:12:48.513 "write": true, 00:12:48.513 "unmap": true, 00:12:48.513 "flush": true, 00:12:48.513 "reset": true, 00:12:48.513 "nvme_admin": false, 00:12:48.513 "nvme_io": false, 00:12:48.513 "nvme_io_md": false, 00:12:48.513 "write_zeroes": true, 00:12:48.513 "zcopy": true, 00:12:48.513 "get_zone_info": false, 00:12:48.513 "zone_management": false, 00:12:48.513 "zone_append": false, 00:12:48.513 "compare": false, 00:12:48.513 "compare_and_write": false, 00:12:48.513 "abort": true, 00:12:48.513 "seek_hole": false, 00:12:48.513 "seek_data": false, 00:12:48.513 "copy": true, 00:12:48.513 "nvme_iov_md": false 00:12:48.513 }, 00:12:48.513 "memory_domains": [ 00:12:48.513 { 00:12:48.513 "dma_device_id": "system", 00:12:48.513 "dma_device_type": 1 00:12:48.513 }, 00:12:48.513 { 00:12:48.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.513 "dma_device_type": 2 00:12:48.513 } 00:12:48.513 ], 00:12:48.513 "driver_specific": {} 00:12:48.513 } 00:12:48.513 ] 00:12:48.513 06:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:48.513 06:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:48.513 06:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=Existed_Raid 00:12:48.513 06:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:48.513 06:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:48.513 06:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:48.513 06:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:48.513 06:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:48.513 06:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:48.513 06:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:48.513 06:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:48.513 06:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.513 06:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.770 06:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:48.770 "name": "Existed_Raid", 00:12:48.770 "uuid": "6d7fe538-48bc-11ef-a06c-59ddad71024c", 00:12:48.770 "strip_size_kb": 64, 00:12:48.770 "state": "configuring", 00:12:48.770 "raid_level": "concat", 00:12:48.770 "superblock": true, 00:12:48.770 "num_base_bdevs": 3, 00:12:48.770 "num_base_bdevs_discovered": 1, 00:12:48.770 "num_base_bdevs_operational": 3, 00:12:48.770 "base_bdevs_list": [ 00:12:48.770 { 00:12:48.770 "name": "BaseBdev1", 00:12:48.770 "uuid": "6da65ad8-48bc-11ef-a06c-59ddad71024c", 00:12:48.770 "is_configured": true, 00:12:48.770 "data_offset": 2048, 00:12:48.770 "data_size": 63488 00:12:48.770 }, 00:12:48.770 { 00:12:48.770 "name": "BaseBdev2", 00:12:48.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.770 "is_configured": false, 00:12:48.770 "data_offset": 0, 00:12:48.770 "data_size": 0 00:12:48.770 }, 00:12:48.770 { 00:12:48.770 "name": "BaseBdev3", 00:12:48.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.770 "is_configured": false, 00:12:48.770 "data_offset": 0, 00:12:48.770 "data_size": 0 00:12:48.770 } 00:12:48.770 ] 00:12:48.770 }' 00:12:48.770 06:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:48.770 06:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.028 06:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:49.287 [2024-07-23 06:26:01.748754] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.287 [2024-07-23 06:26:01.748781] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c3abe234500 name Existed_Raid, state configuring 00:12:49.287 06:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:49.544 [2024-07-23 06:26:01.988822] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.544 [2024-07-23 
06:26:01.989661] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.544 [2024-07-23 06:26:01.989747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.544 [2024-07-23 06:26:01.989768] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:49.544 [2024-07-23 06:26:01.989777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:49.544 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:49.544 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:49.544 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:49.544 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:49.544 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:49.544 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:49.544 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:49.544 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:49.544 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:49.544 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:49.544 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:49.544 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:49.544 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.544 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.806 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:49.806 "name": "Existed_Raid", 00:12:49.806 "uuid": "6eb08402-48bc-11ef-a06c-59ddad71024c", 00:12:49.806 "strip_size_kb": 64, 00:12:49.806 "state": "configuring", 00:12:49.806 "raid_level": "concat", 00:12:49.806 "superblock": true, 00:12:49.806 "num_base_bdevs": 3, 00:12:49.806 "num_base_bdevs_discovered": 1, 00:12:49.806 "num_base_bdevs_operational": 3, 00:12:49.806 "base_bdevs_list": [ 00:12:49.806 { 00:12:49.806 "name": "BaseBdev1", 00:12:49.806 "uuid": "6da65ad8-48bc-11ef-a06c-59ddad71024c", 00:12:49.806 "is_configured": true, 00:12:49.806 "data_offset": 2048, 00:12:49.806 "data_size": 63488 00:12:49.806 }, 00:12:49.806 { 00:12:49.806 "name": "BaseBdev2", 00:12:49.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.806 "is_configured": false, 00:12:49.806 "data_offset": 0, 00:12:49.806 "data_size": 0 00:12:49.806 }, 00:12:49.806 { 00:12:49.806 "name": "BaseBdev3", 00:12:49.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.806 "is_configured": false, 00:12:49.806 "data_offset": 0, 00:12:49.806 "data_size": 0 00:12:49.806 } 00:12:49.806 ] 00:12:49.806 }' 00:12:49.806 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:49.806 
06:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.378 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:50.647 [2024-07-23 06:26:02.953011] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.647 BaseBdev2 00:12:50.647 06:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:50.647 06:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:50.647 06:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:50.647 06:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:50.647 06:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:50.647 06:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:50.647 06:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:50.906 06:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:51.165 [ 00:12:51.165 { 00:12:51.165 "name": "BaseBdev2", 00:12:51.165 "aliases": [ 00:12:51.165 "6f439f00-48bc-11ef-a06c-59ddad71024c" 00:12:51.165 ], 00:12:51.165 "product_name": "Malloc disk", 00:12:51.165 "block_size": 512, 00:12:51.165 "num_blocks": 65536, 00:12:51.165 "uuid": "6f439f00-48bc-11ef-a06c-59ddad71024c", 00:12:51.165 "assigned_rate_limits": { 00:12:51.165 "rw_ios_per_sec": 0, 00:12:51.165 "rw_mbytes_per_sec": 0, 00:12:51.165 "r_mbytes_per_sec": 0, 00:12:51.165 "w_mbytes_per_sec": 0 00:12:51.165 }, 00:12:51.165 "claimed": true, 00:12:51.165 "claim_type": "exclusive_write", 00:12:51.165 "zoned": false, 00:12:51.165 "supported_io_types": { 00:12:51.165 "read": true, 00:12:51.165 "write": true, 00:12:51.165 "unmap": true, 00:12:51.165 "flush": true, 00:12:51.165 "reset": true, 00:12:51.165 "nvme_admin": false, 00:12:51.165 "nvme_io": false, 00:12:51.165 "nvme_io_md": false, 00:12:51.165 "write_zeroes": true, 00:12:51.165 "zcopy": true, 00:12:51.165 "get_zone_info": false, 00:12:51.165 "zone_management": false, 00:12:51.165 "zone_append": false, 00:12:51.165 "compare": false, 00:12:51.165 "compare_and_write": false, 00:12:51.165 "abort": true, 00:12:51.165 "seek_hole": false, 00:12:51.165 "seek_data": false, 00:12:51.165 "copy": true, 00:12:51.165 "nvme_iov_md": false 00:12:51.165 }, 00:12:51.165 "memory_domains": [ 00:12:51.165 { 00:12:51.165 "dma_device_id": "system", 00:12:51.165 "dma_device_type": 1 00:12:51.165 }, 00:12:51.165 { 00:12:51.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.165 "dma_device_type": 2 00:12:51.165 } 00:12:51.165 ], 00:12:51.165 "driver_specific": {} 00:12:51.165 } 00:12:51.165 ] 00:12:51.165 06:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:51.165 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:51.165 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:51.165 06:26:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:51.165 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:51.165 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:51.165 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:51.165 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:51.165 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:51.165 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:51.165 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:51.165 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:51.165 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:51.165 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.165 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.423 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:51.423 "name": "Existed_Raid", 00:12:51.423 "uuid": "6eb08402-48bc-11ef-a06c-59ddad71024c", 00:12:51.423 "strip_size_kb": 64, 00:12:51.423 "state": "configuring", 00:12:51.423 "raid_level": "concat", 00:12:51.423 "superblock": true, 00:12:51.423 "num_base_bdevs": 3, 00:12:51.423 "num_base_bdevs_discovered": 2, 00:12:51.423 "num_base_bdevs_operational": 3, 00:12:51.423 "base_bdevs_list": [ 00:12:51.423 { 00:12:51.423 "name": "BaseBdev1", 00:12:51.423 "uuid": "6da65ad8-48bc-11ef-a06c-59ddad71024c", 00:12:51.423 "is_configured": true, 00:12:51.423 "data_offset": 2048, 00:12:51.423 "data_size": 63488 00:12:51.423 }, 00:12:51.423 { 00:12:51.423 "name": "BaseBdev2", 00:12:51.423 "uuid": "6f439f00-48bc-11ef-a06c-59ddad71024c", 00:12:51.423 "is_configured": true, 00:12:51.423 "data_offset": 2048, 00:12:51.423 "data_size": 63488 00:12:51.423 }, 00:12:51.423 { 00:12:51.423 "name": "BaseBdev3", 00:12:51.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.423 "is_configured": false, 00:12:51.423 "data_offset": 0, 00:12:51.423 "data_size": 0 00:12:51.423 } 00:12:51.423 ] 00:12:51.423 }' 00:12:51.423 06:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:51.423 06:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.682 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:51.940 [2024-07-23 06:26:04.313104] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.940 [2024-07-23 06:26:04.313179] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3c3abe234a00 00:12:51.940 [2024-07-23 06:26:04.313185] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:51.940 [2024-07-23 06:26:04.313205] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3c3abe297e20 00:12:51.940 [2024-07-23 06:26:04.313256] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3c3abe234a00 00:12:51.940 [2024-07-23 06:26:04.313260] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3c3abe234a00 00:12:51.940 [2024-07-23 06:26:04.313280] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.940 BaseBdev3 00:12:51.940 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:51.940 06:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:51.940 06:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:51.940 06:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:51.940 06:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:51.940 06:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:51.941 06:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:52.199 06:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:52.457 [ 00:12:52.458 { 00:12:52.458 "name": "BaseBdev3", 00:12:52.458 "aliases": [ 00:12:52.458 "70132784-48bc-11ef-a06c-59ddad71024c" 00:12:52.458 ], 00:12:52.458 "product_name": "Malloc disk", 00:12:52.458 "block_size": 512, 00:12:52.458 "num_blocks": 65536, 00:12:52.458 "uuid": "70132784-48bc-11ef-a06c-59ddad71024c", 00:12:52.458 "assigned_rate_limits": { 00:12:52.458 "rw_ios_per_sec": 0, 00:12:52.458 "rw_mbytes_per_sec": 0, 00:12:52.458 "r_mbytes_per_sec": 0, 00:12:52.458 "w_mbytes_per_sec": 0 00:12:52.458 }, 00:12:52.458 "claimed": true, 00:12:52.458 "claim_type": "exclusive_write", 00:12:52.458 "zoned": false, 00:12:52.458 "supported_io_types": { 00:12:52.458 "read": true, 00:12:52.458 "write": true, 00:12:52.458 "unmap": true, 00:12:52.458 "flush": true, 00:12:52.458 "reset": true, 00:12:52.458 "nvme_admin": false, 00:12:52.458 "nvme_io": false, 00:12:52.458 "nvme_io_md": false, 00:12:52.458 "write_zeroes": true, 00:12:52.458 "zcopy": true, 00:12:52.458 "get_zone_info": false, 00:12:52.458 "zone_management": false, 00:12:52.458 "zone_append": false, 00:12:52.458 "compare": false, 00:12:52.458 "compare_and_write": false, 00:12:52.458 "abort": true, 00:12:52.458 "seek_hole": false, 00:12:52.458 "seek_data": false, 00:12:52.458 "copy": true, 00:12:52.458 "nvme_iov_md": false 00:12:52.458 }, 00:12:52.458 "memory_domains": [ 00:12:52.458 { 00:12:52.458 "dma_device_id": "system", 00:12:52.458 "dma_device_type": 1 00:12:52.458 }, 00:12:52.458 { 00:12:52.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.458 "dma_device_type": 2 00:12:52.458 } 00:12:52.458 ], 00:12:52.458 "driver_specific": {} 00:12:52.458 } 00:12:52.458 ] 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.458 06:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.716 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.716 "name": "Existed_Raid", 00:12:52.716 "uuid": "6eb08402-48bc-11ef-a06c-59ddad71024c", 00:12:52.716 "strip_size_kb": 64, 00:12:52.716 "state": "online", 00:12:52.716 "raid_level": "concat", 00:12:52.716 "superblock": true, 00:12:52.716 "num_base_bdevs": 3, 00:12:52.716 "num_base_bdevs_discovered": 3, 00:12:52.716 "num_base_bdevs_operational": 3, 00:12:52.716 "base_bdevs_list": [ 00:12:52.716 { 00:12:52.716 "name": "BaseBdev1", 00:12:52.716 "uuid": "6da65ad8-48bc-11ef-a06c-59ddad71024c", 00:12:52.716 "is_configured": true, 00:12:52.716 "data_offset": 2048, 00:12:52.716 "data_size": 63488 00:12:52.716 }, 00:12:52.716 { 00:12:52.716 "name": "BaseBdev2", 00:12:52.716 "uuid": "6f439f00-48bc-11ef-a06c-59ddad71024c", 00:12:52.716 "is_configured": true, 00:12:52.716 "data_offset": 2048, 00:12:52.716 "data_size": 63488 00:12:52.716 }, 00:12:52.716 { 00:12:52.716 "name": "BaseBdev3", 00:12:52.716 "uuid": "70132784-48bc-11ef-a06c-59ddad71024c", 00:12:52.716 "is_configured": true, 00:12:52.716 "data_offset": 2048, 00:12:52.716 "data_size": 63488 00:12:52.716 } 00:12:52.716 ] 00:12:52.716 }' 00:12:52.716 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.716 06:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.975 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:52.975 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:52.975 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:52.975 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:52.975 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:52.975 06:26:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:52.975 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:52.975 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:53.233 [2024-07-23 06:26:05.709056] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.233 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:53.233 "name": "Existed_Raid", 00:12:53.233 "aliases": [ 00:12:53.233 "6eb08402-48bc-11ef-a06c-59ddad71024c" 00:12:53.233 ], 00:12:53.233 "product_name": "Raid Volume", 00:12:53.233 "block_size": 512, 00:12:53.233 "num_blocks": 190464, 00:12:53.233 "uuid": "6eb08402-48bc-11ef-a06c-59ddad71024c", 00:12:53.233 "assigned_rate_limits": { 00:12:53.233 "rw_ios_per_sec": 0, 00:12:53.233 "rw_mbytes_per_sec": 0, 00:12:53.233 "r_mbytes_per_sec": 0, 00:12:53.233 "w_mbytes_per_sec": 0 00:12:53.233 }, 00:12:53.233 "claimed": false, 00:12:53.233 "zoned": false, 00:12:53.233 "supported_io_types": { 00:12:53.233 "read": true, 00:12:53.233 "write": true, 00:12:53.233 "unmap": true, 00:12:53.233 "flush": true, 00:12:53.233 "reset": true, 00:12:53.233 "nvme_admin": false, 00:12:53.233 "nvme_io": false, 00:12:53.233 "nvme_io_md": false, 00:12:53.233 "write_zeroes": true, 00:12:53.233 "zcopy": false, 00:12:53.233 "get_zone_info": false, 00:12:53.233 "zone_management": false, 00:12:53.233 "zone_append": false, 00:12:53.233 "compare": false, 00:12:53.233 "compare_and_write": false, 00:12:53.233 "abort": false, 00:12:53.233 "seek_hole": false, 00:12:53.233 "seek_data": false, 00:12:53.233 "copy": false, 00:12:53.233 "nvme_iov_md": false 00:12:53.233 }, 00:12:53.233 "memory_domains": [ 00:12:53.233 { 00:12:53.233 "dma_device_id": "system", 00:12:53.233 "dma_device_type": 1 00:12:53.233 }, 00:12:53.233 { 00:12:53.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.233 "dma_device_type": 2 00:12:53.233 }, 00:12:53.233 { 00:12:53.233 "dma_device_id": "system", 00:12:53.233 "dma_device_type": 1 00:12:53.233 }, 00:12:53.233 { 00:12:53.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.233 "dma_device_type": 2 00:12:53.233 }, 00:12:53.233 { 00:12:53.233 "dma_device_id": "system", 00:12:53.233 "dma_device_type": 1 00:12:53.233 }, 00:12:53.233 { 00:12:53.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.233 "dma_device_type": 2 00:12:53.233 } 00:12:53.233 ], 00:12:53.233 "driver_specific": { 00:12:53.233 "raid": { 00:12:53.233 "uuid": "6eb08402-48bc-11ef-a06c-59ddad71024c", 00:12:53.233 "strip_size_kb": 64, 00:12:53.233 "state": "online", 00:12:53.233 "raid_level": "concat", 00:12:53.233 "superblock": true, 00:12:53.233 "num_base_bdevs": 3, 00:12:53.233 "num_base_bdevs_discovered": 3, 00:12:53.233 "num_base_bdevs_operational": 3, 00:12:53.233 "base_bdevs_list": [ 00:12:53.233 { 00:12:53.233 "name": "BaseBdev1", 00:12:53.233 "uuid": "6da65ad8-48bc-11ef-a06c-59ddad71024c", 00:12:53.233 "is_configured": true, 00:12:53.233 "data_offset": 2048, 00:12:53.233 "data_size": 63488 00:12:53.233 }, 00:12:53.233 { 00:12:53.233 "name": "BaseBdev2", 00:12:53.233 "uuid": "6f439f00-48bc-11ef-a06c-59ddad71024c", 00:12:53.233 "is_configured": true, 00:12:53.233 "data_offset": 2048, 00:12:53.233 "data_size": 63488 00:12:53.233 }, 00:12:53.233 { 00:12:53.233 "name": "BaseBdev3", 00:12:53.233 "uuid": 
"70132784-48bc-11ef-a06c-59ddad71024c", 00:12:53.233 "is_configured": true, 00:12:53.233 "data_offset": 2048, 00:12:53.233 "data_size": 63488 00:12:53.233 } 00:12:53.233 ] 00:12:53.233 } 00:12:53.233 } 00:12:53.233 }' 00:12:53.233 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:53.233 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:53.233 BaseBdev2 00:12:53.233 BaseBdev3' 00:12:53.233 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.233 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:53.233 06:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:53.800 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:53.800 "name": "BaseBdev1", 00:12:53.800 "aliases": [ 00:12:53.800 "6da65ad8-48bc-11ef-a06c-59ddad71024c" 00:12:53.800 ], 00:12:53.800 "product_name": "Malloc disk", 00:12:53.800 "block_size": 512, 00:12:53.800 "num_blocks": 65536, 00:12:53.800 "uuid": "6da65ad8-48bc-11ef-a06c-59ddad71024c", 00:12:53.800 "assigned_rate_limits": { 00:12:53.800 "rw_ios_per_sec": 0, 00:12:53.800 "rw_mbytes_per_sec": 0, 00:12:53.800 "r_mbytes_per_sec": 0, 00:12:53.800 "w_mbytes_per_sec": 0 00:12:53.800 }, 00:12:53.800 "claimed": true, 00:12:53.800 "claim_type": "exclusive_write", 00:12:53.801 "zoned": false, 00:12:53.801 "supported_io_types": { 00:12:53.801 "read": true, 00:12:53.801 "write": true, 00:12:53.801 "unmap": true, 00:12:53.801 "flush": true, 00:12:53.801 "reset": true, 00:12:53.801 "nvme_admin": false, 00:12:53.801 "nvme_io": false, 00:12:53.801 "nvme_io_md": false, 00:12:53.801 "write_zeroes": true, 00:12:53.801 "zcopy": true, 00:12:53.801 "get_zone_info": false, 00:12:53.801 "zone_management": false, 00:12:53.801 "zone_append": false, 00:12:53.801 "compare": false, 00:12:53.801 "compare_and_write": false, 00:12:53.801 "abort": true, 00:12:53.801 "seek_hole": false, 00:12:53.801 "seek_data": false, 00:12:53.801 "copy": true, 00:12:53.801 "nvme_iov_md": false 00:12:53.801 }, 00:12:53.801 "memory_domains": [ 00:12:53.801 { 00:12:53.801 "dma_device_id": "system", 00:12:53.801 "dma_device_type": 1 00:12:53.801 }, 00:12:53.801 { 00:12:53.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.801 "dma_device_type": 2 00:12:53.801 } 00:12:53.801 ], 00:12:53.801 "driver_specific": {} 00:12:53.801 }' 00:12:53.801 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.801 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.801 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:53.801 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.801 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.801 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:53.801 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.801 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.801 
06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.801 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.801 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.801 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:53.801 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.801 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:53.801 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:54.058 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:54.058 "name": "BaseBdev2", 00:12:54.058 "aliases": [ 00:12:54.058 "6f439f00-48bc-11ef-a06c-59ddad71024c" 00:12:54.058 ], 00:12:54.059 "product_name": "Malloc disk", 00:12:54.059 "block_size": 512, 00:12:54.059 "num_blocks": 65536, 00:12:54.059 "uuid": "6f439f00-48bc-11ef-a06c-59ddad71024c", 00:12:54.059 "assigned_rate_limits": { 00:12:54.059 "rw_ios_per_sec": 0, 00:12:54.059 "rw_mbytes_per_sec": 0, 00:12:54.059 "r_mbytes_per_sec": 0, 00:12:54.059 "w_mbytes_per_sec": 0 00:12:54.059 }, 00:12:54.059 "claimed": true, 00:12:54.059 "claim_type": "exclusive_write", 00:12:54.059 "zoned": false, 00:12:54.059 "supported_io_types": { 00:12:54.059 "read": true, 00:12:54.059 "write": true, 00:12:54.059 "unmap": true, 00:12:54.059 "flush": true, 00:12:54.059 "reset": true, 00:12:54.059 "nvme_admin": false, 00:12:54.059 "nvme_io": false, 00:12:54.059 "nvme_io_md": false, 00:12:54.059 "write_zeroes": true, 00:12:54.059 "zcopy": true, 00:12:54.059 "get_zone_info": false, 00:12:54.059 "zone_management": false, 00:12:54.059 "zone_append": false, 00:12:54.059 "compare": false, 00:12:54.059 "compare_and_write": false, 00:12:54.059 "abort": true, 00:12:54.059 "seek_hole": false, 00:12:54.059 "seek_data": false, 00:12:54.059 "copy": true, 00:12:54.059 "nvme_iov_md": false 00:12:54.059 }, 00:12:54.059 "memory_domains": [ 00:12:54.059 { 00:12:54.059 "dma_device_id": "system", 00:12:54.059 "dma_device_type": 1 00:12:54.059 }, 00:12:54.059 { 00:12:54.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.059 "dma_device_type": 2 00:12:54.059 } 00:12:54.059 ], 00:12:54.059 "driver_specific": {} 00:12:54.059 }' 00:12:54.059 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.059 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.059 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:54.059 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.059 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.059 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:54.059 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.059 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.059 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:54.059 06:26:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.059 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.059 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:54.059 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:54.059 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:54.059 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:54.317 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:54.317 "name": "BaseBdev3", 00:12:54.317 "aliases": [ 00:12:54.317 "70132784-48bc-11ef-a06c-59ddad71024c" 00:12:54.317 ], 00:12:54.317 "product_name": "Malloc disk", 00:12:54.317 "block_size": 512, 00:12:54.317 "num_blocks": 65536, 00:12:54.317 "uuid": "70132784-48bc-11ef-a06c-59ddad71024c", 00:12:54.317 "assigned_rate_limits": { 00:12:54.317 "rw_ios_per_sec": 0, 00:12:54.317 "rw_mbytes_per_sec": 0, 00:12:54.317 "r_mbytes_per_sec": 0, 00:12:54.317 "w_mbytes_per_sec": 0 00:12:54.317 }, 00:12:54.317 "claimed": true, 00:12:54.317 "claim_type": "exclusive_write", 00:12:54.317 "zoned": false, 00:12:54.317 "supported_io_types": { 00:12:54.317 "read": true, 00:12:54.317 "write": true, 00:12:54.317 "unmap": true, 00:12:54.317 "flush": true, 00:12:54.317 "reset": true, 00:12:54.317 "nvme_admin": false, 00:12:54.317 "nvme_io": false, 00:12:54.317 "nvme_io_md": false, 00:12:54.317 "write_zeroes": true, 00:12:54.317 "zcopy": true, 00:12:54.317 "get_zone_info": false, 00:12:54.317 "zone_management": false, 00:12:54.317 "zone_append": false, 00:12:54.317 "compare": false, 00:12:54.317 "compare_and_write": false, 00:12:54.317 "abort": true, 00:12:54.317 "seek_hole": false, 00:12:54.317 "seek_data": false, 00:12:54.317 "copy": true, 00:12:54.317 "nvme_iov_md": false 00:12:54.317 }, 00:12:54.317 "memory_domains": [ 00:12:54.317 { 00:12:54.317 "dma_device_id": "system", 00:12:54.317 "dma_device_type": 1 00:12:54.317 }, 00:12:54.317 { 00:12:54.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.317 "dma_device_type": 2 00:12:54.317 } 00:12:54.317 ], 00:12:54.317 "driver_specific": {} 00:12:54.317 }' 00:12:54.317 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.317 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.317 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:54.317 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.317 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.317 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:54.317 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.317 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.317 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:54.317 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.317 06:26:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.317 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:54.317 06:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:54.595 [2024-07-23 06:26:07.009159] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.595 [2024-07-23 06:26:07.009186] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.595 [2024-07-23 06:26:07.009201] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.595 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.853 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:54.853 "name": "Existed_Raid", 00:12:54.853 "uuid": "6eb08402-48bc-11ef-a06c-59ddad71024c", 00:12:54.853 "strip_size_kb": 64, 00:12:54.853 "state": "offline", 00:12:54.853 "raid_level": "concat", 00:12:54.853 "superblock": true, 00:12:54.853 "num_base_bdevs": 3, 00:12:54.853 "num_base_bdevs_discovered": 2, 00:12:54.853 "num_base_bdevs_operational": 2, 00:12:54.853 "base_bdevs_list": [ 00:12:54.853 { 00:12:54.853 "name": null, 00:12:54.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.853 "is_configured": false, 00:12:54.853 "data_offset": 2048, 00:12:54.853 "data_size": 63488 00:12:54.853 }, 00:12:54.853 { 00:12:54.853 "name": "BaseBdev2", 00:12:54.853 "uuid": 
"6f439f00-48bc-11ef-a06c-59ddad71024c", 00:12:54.853 "is_configured": true, 00:12:54.853 "data_offset": 2048, 00:12:54.853 "data_size": 63488 00:12:54.853 }, 00:12:54.853 { 00:12:54.853 "name": "BaseBdev3", 00:12:54.853 "uuid": "70132784-48bc-11ef-a06c-59ddad71024c", 00:12:54.853 "is_configured": true, 00:12:54.853 "data_offset": 2048, 00:12:54.854 "data_size": 63488 00:12:54.854 } 00:12:54.854 ] 00:12:54.854 }' 00:12:54.854 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:54.854 06:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.419 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:55.419 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:55.419 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.419 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:55.419 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:55.419 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.419 06:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:55.676 [2024-07-23 06:26:08.151083] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.676 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:55.676 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:55.676 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.676 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:55.934 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:55.934 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.934 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:56.192 [2024-07-23 06:26:08.660984] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:56.192 [2024-07-23 06:26:08.661017] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c3abe234a00 name Existed_Raid, state offline 00:12:56.192 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:56.192 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:56.193 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.193 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:56.450 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:56.450 06:26:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:56.450 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:56.450 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:56.450 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:56.450 06:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:56.708 BaseBdev2 00:12:56.708 06:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:56.708 06:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:56.708 06:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:56.708 06:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:56.708 06:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:56.708 06:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:56.708 06:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:56.966 06:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:57.225 [ 00:12:57.225 { 00:12:57.225 "name": "BaseBdev2", 00:12:57.225 "aliases": [ 00:12:57.225 "72f2519b-48bc-11ef-a06c-59ddad71024c" 00:12:57.225 ], 00:12:57.225 "product_name": "Malloc disk", 00:12:57.225 "block_size": 512, 00:12:57.225 "num_blocks": 65536, 00:12:57.225 "uuid": "72f2519b-48bc-11ef-a06c-59ddad71024c", 00:12:57.225 "assigned_rate_limits": { 00:12:57.225 "rw_ios_per_sec": 0, 00:12:57.225 "rw_mbytes_per_sec": 0, 00:12:57.225 "r_mbytes_per_sec": 0, 00:12:57.225 "w_mbytes_per_sec": 0 00:12:57.225 }, 00:12:57.225 "claimed": false, 00:12:57.225 "zoned": false, 00:12:57.225 "supported_io_types": { 00:12:57.225 "read": true, 00:12:57.225 "write": true, 00:12:57.225 "unmap": true, 00:12:57.225 "flush": true, 00:12:57.225 "reset": true, 00:12:57.225 "nvme_admin": false, 00:12:57.225 "nvme_io": false, 00:12:57.225 "nvme_io_md": false, 00:12:57.225 "write_zeroes": true, 00:12:57.225 "zcopy": true, 00:12:57.225 "get_zone_info": false, 00:12:57.225 "zone_management": false, 00:12:57.225 "zone_append": false, 00:12:57.225 "compare": false, 00:12:57.225 "compare_and_write": false, 00:12:57.225 "abort": true, 00:12:57.225 "seek_hole": false, 00:12:57.225 "seek_data": false, 00:12:57.225 "copy": true, 00:12:57.225 "nvme_iov_md": false 00:12:57.225 }, 00:12:57.225 "memory_domains": [ 00:12:57.225 { 00:12:57.225 "dma_device_id": "system", 00:12:57.225 "dma_device_type": 1 00:12:57.225 }, 00:12:57.225 { 00:12:57.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.225 "dma_device_type": 2 00:12:57.225 } 00:12:57.225 ], 00:12:57.225 "driver_specific": {} 00:12:57.225 } 00:12:57.225 ] 00:12:57.225 06:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:57.225 06:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:57.225 06:26:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:57.225 06:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:57.484 BaseBdev3 00:12:57.484 06:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:57.484 06:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:57.484 06:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:57.484 06:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:57.484 06:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:57.484 06:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:57.484 06:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:57.753 06:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:58.011 [ 00:12:58.011 { 00:12:58.011 "name": "BaseBdev3", 00:12:58.011 "aliases": [ 00:12:58.011 "7363d87c-48bc-11ef-a06c-59ddad71024c" 00:12:58.011 ], 00:12:58.011 "product_name": "Malloc disk", 00:12:58.011 "block_size": 512, 00:12:58.011 "num_blocks": 65536, 00:12:58.011 "uuid": "7363d87c-48bc-11ef-a06c-59ddad71024c", 00:12:58.011 "assigned_rate_limits": { 00:12:58.011 "rw_ios_per_sec": 0, 00:12:58.011 "rw_mbytes_per_sec": 0, 00:12:58.011 "r_mbytes_per_sec": 0, 00:12:58.011 "w_mbytes_per_sec": 0 00:12:58.011 }, 00:12:58.011 "claimed": false, 00:12:58.011 "zoned": false, 00:12:58.011 "supported_io_types": { 00:12:58.011 "read": true, 00:12:58.011 "write": true, 00:12:58.011 "unmap": true, 00:12:58.011 "flush": true, 00:12:58.011 "reset": true, 00:12:58.011 "nvme_admin": false, 00:12:58.011 "nvme_io": false, 00:12:58.011 "nvme_io_md": false, 00:12:58.011 "write_zeroes": true, 00:12:58.011 "zcopy": true, 00:12:58.011 "get_zone_info": false, 00:12:58.011 "zone_management": false, 00:12:58.011 "zone_append": false, 00:12:58.011 "compare": false, 00:12:58.011 "compare_and_write": false, 00:12:58.011 "abort": true, 00:12:58.011 "seek_hole": false, 00:12:58.011 "seek_data": false, 00:12:58.011 "copy": true, 00:12:58.011 "nvme_iov_md": false 00:12:58.011 }, 00:12:58.011 "memory_domains": [ 00:12:58.011 { 00:12:58.011 "dma_device_id": "system", 00:12:58.011 "dma_device_type": 1 00:12:58.011 }, 00:12:58.011 { 00:12:58.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.011 "dma_device_type": 2 00:12:58.011 } 00:12:58.011 ], 00:12:58.011 "driver_specific": {} 00:12:58.011 } 00:12:58.011 ] 00:12:58.011 06:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:58.011 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:58.011 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:58.011 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 
BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:58.270 [2024-07-23 06:26:10.558971] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:58.270 [2024-07-23 06:26:10.559026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:58.270 [2024-07-23 06:26:10.559036] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.270 [2024-07-23 06:26:10.559592] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:58.270 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:58.270 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:58.270 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:58.270 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:58.270 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:58.270 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:58.270 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:58.270 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:58.270 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:58.270 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:58.270 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.270 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.529 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:58.529 "name": "Existed_Raid", 00:12:58.529 "uuid": "73cc378f-48bc-11ef-a06c-59ddad71024c", 00:12:58.529 "strip_size_kb": 64, 00:12:58.529 "state": "configuring", 00:12:58.529 "raid_level": "concat", 00:12:58.529 "superblock": true, 00:12:58.529 "num_base_bdevs": 3, 00:12:58.529 "num_base_bdevs_discovered": 2, 00:12:58.529 "num_base_bdevs_operational": 3, 00:12:58.529 "base_bdevs_list": [ 00:12:58.529 { 00:12:58.529 "name": "BaseBdev1", 00:12:58.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.529 "is_configured": false, 00:12:58.529 "data_offset": 0, 00:12:58.529 "data_size": 0 00:12:58.529 }, 00:12:58.529 { 00:12:58.529 "name": "BaseBdev2", 00:12:58.529 "uuid": "72f2519b-48bc-11ef-a06c-59ddad71024c", 00:12:58.529 "is_configured": true, 00:12:58.529 "data_offset": 2048, 00:12:58.529 "data_size": 63488 00:12:58.529 }, 00:12:58.529 { 00:12:58.529 "name": "BaseBdev3", 00:12:58.529 "uuid": "7363d87c-48bc-11ef-a06c-59ddad71024c", 00:12:58.529 "is_configured": true, 00:12:58.529 "data_offset": 2048, 00:12:58.529 "data_size": 63488 00:12:58.529 } 00:12:58.529 ] 00:12:58.529 }' 00:12:58.529 06:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:58.529 06:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.787 06:26:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:59.045 [2024-07-23 06:26:11.354980] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:59.045 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:59.045 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:59.045 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:59.045 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:59.045 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:59.045 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:59.045 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:59.045 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:59.045 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:59.045 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:59.045 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.045 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.302 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:59.302 "name": "Existed_Raid", 00:12:59.302 "uuid": "73cc378f-48bc-11ef-a06c-59ddad71024c", 00:12:59.302 "strip_size_kb": 64, 00:12:59.302 "state": "configuring", 00:12:59.302 "raid_level": "concat", 00:12:59.302 "superblock": true, 00:12:59.302 "num_base_bdevs": 3, 00:12:59.302 "num_base_bdevs_discovered": 1, 00:12:59.302 "num_base_bdevs_operational": 3, 00:12:59.302 "base_bdevs_list": [ 00:12:59.302 { 00:12:59.302 "name": "BaseBdev1", 00:12:59.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.302 "is_configured": false, 00:12:59.302 "data_offset": 0, 00:12:59.302 "data_size": 0 00:12:59.302 }, 00:12:59.302 { 00:12:59.302 "name": null, 00:12:59.302 "uuid": "72f2519b-48bc-11ef-a06c-59ddad71024c", 00:12:59.302 "is_configured": false, 00:12:59.302 "data_offset": 2048, 00:12:59.302 "data_size": 63488 00:12:59.302 }, 00:12:59.302 { 00:12:59.302 "name": "BaseBdev3", 00:12:59.302 "uuid": "7363d87c-48bc-11ef-a06c-59ddad71024c", 00:12:59.302 "is_configured": true, 00:12:59.302 "data_offset": 2048, 00:12:59.302 "data_size": 63488 00:12:59.302 } 00:12:59.302 ] 00:12:59.302 }' 00:12:59.302 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:59.302 06:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.559 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.559 06:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:59.816 06:26:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:59.816 06:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:00.075 [2024-07-23 06:26:12.419149] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.075 BaseBdev1 00:13:00.075 06:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:00.075 06:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:00.075 06:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:00.075 06:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:00.075 06:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:00.075 06:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:00.075 06:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:00.333 06:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:00.591 [ 00:13:00.591 { 00:13:00.591 "name": "BaseBdev1", 00:13:00.591 "aliases": [ 00:13:00.591 "74e80a0e-48bc-11ef-a06c-59ddad71024c" 00:13:00.591 ], 00:13:00.591 "product_name": "Malloc disk", 00:13:00.591 "block_size": 512, 00:13:00.591 "num_blocks": 65536, 00:13:00.591 "uuid": "74e80a0e-48bc-11ef-a06c-59ddad71024c", 00:13:00.591 "assigned_rate_limits": { 00:13:00.591 "rw_ios_per_sec": 0, 00:13:00.591 "rw_mbytes_per_sec": 0, 00:13:00.591 "r_mbytes_per_sec": 0, 00:13:00.591 "w_mbytes_per_sec": 0 00:13:00.591 }, 00:13:00.591 "claimed": true, 00:13:00.591 "claim_type": "exclusive_write", 00:13:00.591 "zoned": false, 00:13:00.591 "supported_io_types": { 00:13:00.591 "read": true, 00:13:00.591 "write": true, 00:13:00.591 "unmap": true, 00:13:00.591 "flush": true, 00:13:00.591 "reset": true, 00:13:00.591 "nvme_admin": false, 00:13:00.591 "nvme_io": false, 00:13:00.591 "nvme_io_md": false, 00:13:00.591 "write_zeroes": true, 00:13:00.591 "zcopy": true, 00:13:00.591 "get_zone_info": false, 00:13:00.591 "zone_management": false, 00:13:00.591 "zone_append": false, 00:13:00.591 "compare": false, 00:13:00.591 "compare_and_write": false, 00:13:00.591 "abort": true, 00:13:00.591 "seek_hole": false, 00:13:00.591 "seek_data": false, 00:13:00.591 "copy": true, 00:13:00.591 "nvme_iov_md": false 00:13:00.591 }, 00:13:00.591 "memory_domains": [ 00:13:00.591 { 00:13:00.591 "dma_device_id": "system", 00:13:00.591 "dma_device_type": 1 00:13:00.591 }, 00:13:00.591 { 00:13:00.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.591 "dma_device_type": 2 00:13:00.591 } 00:13:00.591 ], 00:13:00.591 "driver_specific": {} 00:13:00.591 } 00:13:00.591 ] 00:13:00.591 06:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:00.591 06:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:00.591 06:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:00.591 06:26:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:00.591 06:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:00.591 06:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:00.591 06:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:00.591 06:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:00.591 06:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:00.591 06:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:00.591 06:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:00.591 06:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.591 06:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.850 06:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:00.850 "name": "Existed_Raid", 00:13:00.850 "uuid": "73cc378f-48bc-11ef-a06c-59ddad71024c", 00:13:00.850 "strip_size_kb": 64, 00:13:00.850 "state": "configuring", 00:13:00.850 "raid_level": "concat", 00:13:00.850 "superblock": true, 00:13:00.850 "num_base_bdevs": 3, 00:13:00.850 "num_base_bdevs_discovered": 2, 00:13:00.850 "num_base_bdevs_operational": 3, 00:13:00.850 "base_bdevs_list": [ 00:13:00.850 { 00:13:00.850 "name": "BaseBdev1", 00:13:00.850 "uuid": "74e80a0e-48bc-11ef-a06c-59ddad71024c", 00:13:00.850 "is_configured": true, 00:13:00.850 "data_offset": 2048, 00:13:00.850 "data_size": 63488 00:13:00.850 }, 00:13:00.850 { 00:13:00.850 "name": null, 00:13:00.850 "uuid": "72f2519b-48bc-11ef-a06c-59ddad71024c", 00:13:00.850 "is_configured": false, 00:13:00.850 "data_offset": 2048, 00:13:00.850 "data_size": 63488 00:13:00.850 }, 00:13:00.850 { 00:13:00.850 "name": "BaseBdev3", 00:13:00.850 "uuid": "7363d87c-48bc-11ef-a06c-59ddad71024c", 00:13:00.850 "is_configured": true, 00:13:00.850 "data_offset": 2048, 00:13:00.850 "data_size": 63488 00:13:00.850 } 00:13:00.850 ] 00:13:00.850 }' 00:13:00.850 06:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:00.850 06:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.119 06:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.119 06:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:01.377 06:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:01.377 06:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:01.635 [2024-07-23 06:26:14.119085] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:01.635 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:01.635 06:26:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:01.635 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:01.635 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:01.635 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:01.635 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:01.635 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:01.635 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:01.635 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:01.635 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:01.635 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.635 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.200 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:02.200 "name": "Existed_Raid", 00:13:02.200 "uuid": "73cc378f-48bc-11ef-a06c-59ddad71024c", 00:13:02.200 "strip_size_kb": 64, 00:13:02.200 "state": "configuring", 00:13:02.200 "raid_level": "concat", 00:13:02.200 "superblock": true, 00:13:02.200 "num_base_bdevs": 3, 00:13:02.200 "num_base_bdevs_discovered": 1, 00:13:02.200 "num_base_bdevs_operational": 3, 00:13:02.200 "base_bdevs_list": [ 00:13:02.200 { 00:13:02.200 "name": "BaseBdev1", 00:13:02.200 "uuid": "74e80a0e-48bc-11ef-a06c-59ddad71024c", 00:13:02.200 "is_configured": true, 00:13:02.200 "data_offset": 2048, 00:13:02.200 "data_size": 63488 00:13:02.200 }, 00:13:02.200 { 00:13:02.200 "name": null, 00:13:02.200 "uuid": "72f2519b-48bc-11ef-a06c-59ddad71024c", 00:13:02.200 "is_configured": false, 00:13:02.200 "data_offset": 2048, 00:13:02.200 "data_size": 63488 00:13:02.200 }, 00:13:02.200 { 00:13:02.200 "name": null, 00:13:02.200 "uuid": "7363d87c-48bc-11ef-a06c-59ddad71024c", 00:13:02.200 "is_configured": false, 00:13:02.200 "data_offset": 2048, 00:13:02.200 "data_size": 63488 00:13:02.200 } 00:13:02.200 ] 00:13:02.200 }' 00:13:02.200 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:02.200 06:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.457 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.457 06:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:02.715 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:02.715 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:02.973 [2024-07-23 06:26:15.251103] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:02.973 06:26:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:02.973 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:02.973 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:02.973 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:02.973 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:02.973 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:02.973 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:02.973 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:02.973 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:02.973 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:02.973 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.973 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.232 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:03.232 "name": "Existed_Raid", 00:13:03.232 "uuid": "73cc378f-48bc-11ef-a06c-59ddad71024c", 00:13:03.232 "strip_size_kb": 64, 00:13:03.232 "state": "configuring", 00:13:03.232 "raid_level": "concat", 00:13:03.232 "superblock": true, 00:13:03.232 "num_base_bdevs": 3, 00:13:03.232 "num_base_bdevs_discovered": 2, 00:13:03.232 "num_base_bdevs_operational": 3, 00:13:03.232 "base_bdevs_list": [ 00:13:03.232 { 00:13:03.232 "name": "BaseBdev1", 00:13:03.232 "uuid": "74e80a0e-48bc-11ef-a06c-59ddad71024c", 00:13:03.232 "is_configured": true, 00:13:03.232 "data_offset": 2048, 00:13:03.232 "data_size": 63488 00:13:03.232 }, 00:13:03.232 { 00:13:03.232 "name": null, 00:13:03.232 "uuid": "72f2519b-48bc-11ef-a06c-59ddad71024c", 00:13:03.232 "is_configured": false, 00:13:03.232 "data_offset": 2048, 00:13:03.232 "data_size": 63488 00:13:03.232 }, 00:13:03.232 { 00:13:03.232 "name": "BaseBdev3", 00:13:03.232 "uuid": "7363d87c-48bc-11ef-a06c-59ddad71024c", 00:13:03.232 "is_configured": true, 00:13:03.232 "data_offset": 2048, 00:13:03.232 "data_size": 63488 00:13:03.232 } 00:13:03.232 ] 00:13:03.232 }' 00:13:03.232 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:03.232 06:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.490 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:03.490 06:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.748 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:03.748 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:04.006 
[2024-07-23 06:26:16.435142] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:04.006 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:04.006 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:04.006 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:04.006 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:04.006 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:04.006 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:04.006 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:04.006 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:04.006 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:04.006 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:04.006 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.006 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.264 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:04.264 "name": "Existed_Raid", 00:13:04.264 "uuid": "73cc378f-48bc-11ef-a06c-59ddad71024c", 00:13:04.264 "strip_size_kb": 64, 00:13:04.264 "state": "configuring", 00:13:04.264 "raid_level": "concat", 00:13:04.264 "superblock": true, 00:13:04.264 "num_base_bdevs": 3, 00:13:04.264 "num_base_bdevs_discovered": 1, 00:13:04.264 "num_base_bdevs_operational": 3, 00:13:04.264 "base_bdevs_list": [ 00:13:04.264 { 00:13:04.264 "name": null, 00:13:04.264 "uuid": "74e80a0e-48bc-11ef-a06c-59ddad71024c", 00:13:04.264 "is_configured": false, 00:13:04.264 "data_offset": 2048, 00:13:04.264 "data_size": 63488 00:13:04.264 }, 00:13:04.264 { 00:13:04.264 "name": null, 00:13:04.264 "uuid": "72f2519b-48bc-11ef-a06c-59ddad71024c", 00:13:04.264 "is_configured": false, 00:13:04.264 "data_offset": 2048, 00:13:04.264 "data_size": 63488 00:13:04.264 }, 00:13:04.264 { 00:13:04.264 "name": "BaseBdev3", 00:13:04.264 "uuid": "7363d87c-48bc-11ef-a06c-59ddad71024c", 00:13:04.264 "is_configured": true, 00:13:04.264 "data_offset": 2048, 00:13:04.264 "data_size": 63488 00:13:04.264 } 00:13:04.264 ] 00:13:04.264 }' 00:13:04.264 06:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:04.264 06:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.830 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.830 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:04.830 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:04.830 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:05.088 [2024-07-23 06:26:17.545004] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:05.089 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:05.089 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:05.089 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:05.089 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:05.089 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:05.089 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:05.089 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:05.089 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:05.089 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:05.089 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:05.089 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.089 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.347 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:05.347 "name": "Existed_Raid", 00:13:05.347 "uuid": "73cc378f-48bc-11ef-a06c-59ddad71024c", 00:13:05.347 "strip_size_kb": 64, 00:13:05.347 "state": "configuring", 00:13:05.347 "raid_level": "concat", 00:13:05.347 "superblock": true, 00:13:05.347 "num_base_bdevs": 3, 00:13:05.347 "num_base_bdevs_discovered": 2, 00:13:05.347 "num_base_bdevs_operational": 3, 00:13:05.347 "base_bdevs_list": [ 00:13:05.347 { 00:13:05.347 "name": null, 00:13:05.347 "uuid": "74e80a0e-48bc-11ef-a06c-59ddad71024c", 00:13:05.347 "is_configured": false, 00:13:05.347 "data_offset": 2048, 00:13:05.347 "data_size": 63488 00:13:05.347 }, 00:13:05.347 { 00:13:05.347 "name": "BaseBdev2", 00:13:05.347 "uuid": "72f2519b-48bc-11ef-a06c-59ddad71024c", 00:13:05.347 "is_configured": true, 00:13:05.347 "data_offset": 2048, 00:13:05.347 "data_size": 63488 00:13:05.347 }, 00:13:05.347 { 00:13:05.347 "name": "BaseBdev3", 00:13:05.347 "uuid": "7363d87c-48bc-11ef-a06c-59ddad71024c", 00:13:05.347 "is_configured": true, 00:13:05.347 "data_offset": 2048, 00:13:05.347 "data_size": 63488 00:13:05.347 } 00:13:05.347 ] 00:13:05.347 }' 00:13:05.347 06:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:05.347 06:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.913 06:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.913 06:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:06.171 06:26:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:06.171 06:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.171 06:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:06.429 06:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 74e80a0e-48bc-11ef-a06c-59ddad71024c 00:13:06.687 [2024-07-23 06:26:19.001170] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:06.687 [2024-07-23 06:26:19.001232] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3c3abe234a00 00:13:06.687 [2024-07-23 06:26:19.001238] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:06.687 [2024-07-23 06:26:19.001260] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3c3abe297e20 00:13:06.687 [2024-07-23 06:26:19.001309] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3c3abe234a00 00:13:06.687 [2024-07-23 06:26:19.001313] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3c3abe234a00 00:13:06.687 [2024-07-23 06:26:19.001334] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.687 NewBaseBdev 00:13:06.687 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:06.687 06:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:06.687 06:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:06.687 06:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:06.687 06:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:06.687 06:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:06.687 06:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:06.945 06:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:07.203 [ 00:13:07.203 { 00:13:07.203 "name": "NewBaseBdev", 00:13:07.203 "aliases": [ 00:13:07.203 "74e80a0e-48bc-11ef-a06c-59ddad71024c" 00:13:07.203 ], 00:13:07.203 "product_name": "Malloc disk", 00:13:07.203 "block_size": 512, 00:13:07.203 "num_blocks": 65536, 00:13:07.203 "uuid": "74e80a0e-48bc-11ef-a06c-59ddad71024c", 00:13:07.203 "assigned_rate_limits": { 00:13:07.203 "rw_ios_per_sec": 0, 00:13:07.203 "rw_mbytes_per_sec": 0, 00:13:07.203 "r_mbytes_per_sec": 0, 00:13:07.203 "w_mbytes_per_sec": 0 00:13:07.203 }, 00:13:07.203 "claimed": true, 00:13:07.203 "claim_type": "exclusive_write", 00:13:07.203 "zoned": false, 00:13:07.203 "supported_io_types": { 00:13:07.203 "read": true, 00:13:07.203 "write": true, 00:13:07.203 "unmap": true, 00:13:07.203 "flush": true, 00:13:07.203 "reset": true, 00:13:07.203 "nvme_admin": false, 00:13:07.203 "nvme_io": false, 00:13:07.203 "nvme_io_md": false, 00:13:07.203 
"write_zeroes": true, 00:13:07.203 "zcopy": true, 00:13:07.203 "get_zone_info": false, 00:13:07.203 "zone_management": false, 00:13:07.203 "zone_append": false, 00:13:07.203 "compare": false, 00:13:07.203 "compare_and_write": false, 00:13:07.203 "abort": true, 00:13:07.203 "seek_hole": false, 00:13:07.203 "seek_data": false, 00:13:07.203 "copy": true, 00:13:07.203 "nvme_iov_md": false 00:13:07.203 }, 00:13:07.203 "memory_domains": [ 00:13:07.203 { 00:13:07.203 "dma_device_id": "system", 00:13:07.203 "dma_device_type": 1 00:13:07.203 }, 00:13:07.203 { 00:13:07.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.203 "dma_device_type": 2 00:13:07.203 } 00:13:07.203 ], 00:13:07.203 "driver_specific": {} 00:13:07.203 } 00:13:07.203 ] 00:13:07.203 06:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:07.203 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:07.203 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:07.203 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:07.203 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:07.203 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:07.203 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:07.203 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:07.203 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:07.203 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:07.203 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:07.203 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.203 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.474 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:07.474 "name": "Existed_Raid", 00:13:07.474 "uuid": "73cc378f-48bc-11ef-a06c-59ddad71024c", 00:13:07.474 "strip_size_kb": 64, 00:13:07.474 "state": "online", 00:13:07.474 "raid_level": "concat", 00:13:07.474 "superblock": true, 00:13:07.474 "num_base_bdevs": 3, 00:13:07.474 "num_base_bdevs_discovered": 3, 00:13:07.474 "num_base_bdevs_operational": 3, 00:13:07.474 "base_bdevs_list": [ 00:13:07.474 { 00:13:07.474 "name": "NewBaseBdev", 00:13:07.474 "uuid": "74e80a0e-48bc-11ef-a06c-59ddad71024c", 00:13:07.474 "is_configured": true, 00:13:07.474 "data_offset": 2048, 00:13:07.474 "data_size": 63488 00:13:07.474 }, 00:13:07.474 { 00:13:07.474 "name": "BaseBdev2", 00:13:07.474 "uuid": "72f2519b-48bc-11ef-a06c-59ddad71024c", 00:13:07.474 "is_configured": true, 00:13:07.474 "data_offset": 2048, 00:13:07.474 "data_size": 63488 00:13:07.474 }, 00:13:07.474 { 00:13:07.474 "name": "BaseBdev3", 00:13:07.474 "uuid": "7363d87c-48bc-11ef-a06c-59ddad71024c", 00:13:07.474 "is_configured": true, 00:13:07.474 "data_offset": 2048, 00:13:07.474 "data_size": 63488 00:13:07.474 } 00:13:07.474 ] 
00:13:07.474 }' 00:13:07.474 06:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:07.474 06:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.732 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:07.732 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:07.732 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:07.732 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:07.732 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:07.732 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:07.732 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:07.732 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:08.000 [2024-07-23 06:26:20.381119] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.000 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:08.000 "name": "Existed_Raid", 00:13:08.000 "aliases": [ 00:13:08.000 "73cc378f-48bc-11ef-a06c-59ddad71024c" 00:13:08.000 ], 00:13:08.000 "product_name": "Raid Volume", 00:13:08.000 "block_size": 512, 00:13:08.000 "num_blocks": 190464, 00:13:08.000 "uuid": "73cc378f-48bc-11ef-a06c-59ddad71024c", 00:13:08.000 "assigned_rate_limits": { 00:13:08.000 "rw_ios_per_sec": 0, 00:13:08.000 "rw_mbytes_per_sec": 0, 00:13:08.000 "r_mbytes_per_sec": 0, 00:13:08.000 "w_mbytes_per_sec": 0 00:13:08.000 }, 00:13:08.000 "claimed": false, 00:13:08.000 "zoned": false, 00:13:08.000 "supported_io_types": { 00:13:08.000 "read": true, 00:13:08.000 "write": true, 00:13:08.000 "unmap": true, 00:13:08.000 "flush": true, 00:13:08.000 "reset": true, 00:13:08.000 "nvme_admin": false, 00:13:08.000 "nvme_io": false, 00:13:08.000 "nvme_io_md": false, 00:13:08.000 "write_zeroes": true, 00:13:08.000 "zcopy": false, 00:13:08.000 "get_zone_info": false, 00:13:08.000 "zone_management": false, 00:13:08.000 "zone_append": false, 00:13:08.000 "compare": false, 00:13:08.000 "compare_and_write": false, 00:13:08.000 "abort": false, 00:13:08.000 "seek_hole": false, 00:13:08.000 "seek_data": false, 00:13:08.000 "copy": false, 00:13:08.000 "nvme_iov_md": false 00:13:08.000 }, 00:13:08.000 "memory_domains": [ 00:13:08.000 { 00:13:08.000 "dma_device_id": "system", 00:13:08.000 "dma_device_type": 1 00:13:08.000 }, 00:13:08.000 { 00:13:08.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.000 "dma_device_type": 2 00:13:08.000 }, 00:13:08.000 { 00:13:08.000 "dma_device_id": "system", 00:13:08.000 "dma_device_type": 1 00:13:08.000 }, 00:13:08.000 { 00:13:08.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.000 "dma_device_type": 2 00:13:08.000 }, 00:13:08.000 { 00:13:08.000 "dma_device_id": "system", 00:13:08.000 "dma_device_type": 1 00:13:08.000 }, 00:13:08.000 { 00:13:08.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.000 "dma_device_type": 2 00:13:08.000 } 00:13:08.000 ], 00:13:08.000 "driver_specific": { 00:13:08.000 "raid": { 00:13:08.000 "uuid": "73cc378f-48bc-11ef-a06c-59ddad71024c", 00:13:08.000 
"strip_size_kb": 64, 00:13:08.000 "state": "online", 00:13:08.000 "raid_level": "concat", 00:13:08.000 "superblock": true, 00:13:08.000 "num_base_bdevs": 3, 00:13:08.000 "num_base_bdevs_discovered": 3, 00:13:08.000 "num_base_bdevs_operational": 3, 00:13:08.000 "base_bdevs_list": [ 00:13:08.000 { 00:13:08.000 "name": "NewBaseBdev", 00:13:08.000 "uuid": "74e80a0e-48bc-11ef-a06c-59ddad71024c", 00:13:08.000 "is_configured": true, 00:13:08.001 "data_offset": 2048, 00:13:08.001 "data_size": 63488 00:13:08.001 }, 00:13:08.001 { 00:13:08.001 "name": "BaseBdev2", 00:13:08.001 "uuid": "72f2519b-48bc-11ef-a06c-59ddad71024c", 00:13:08.001 "is_configured": true, 00:13:08.001 "data_offset": 2048, 00:13:08.001 "data_size": 63488 00:13:08.001 }, 00:13:08.001 { 00:13:08.001 "name": "BaseBdev3", 00:13:08.001 "uuid": "7363d87c-48bc-11ef-a06c-59ddad71024c", 00:13:08.001 "is_configured": true, 00:13:08.001 "data_offset": 2048, 00:13:08.001 "data_size": 63488 00:13:08.001 } 00:13:08.001 ] 00:13:08.001 } 00:13:08.001 } 00:13:08.001 }' 00:13:08.001 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:08.001 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:08.001 BaseBdev2 00:13:08.001 BaseBdev3' 00:13:08.001 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:08.001 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:08.001 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:08.259 "name": "NewBaseBdev", 00:13:08.259 "aliases": [ 00:13:08.259 "74e80a0e-48bc-11ef-a06c-59ddad71024c" 00:13:08.259 ], 00:13:08.259 "product_name": "Malloc disk", 00:13:08.259 "block_size": 512, 00:13:08.259 "num_blocks": 65536, 00:13:08.259 "uuid": "74e80a0e-48bc-11ef-a06c-59ddad71024c", 00:13:08.259 "assigned_rate_limits": { 00:13:08.259 "rw_ios_per_sec": 0, 00:13:08.259 "rw_mbytes_per_sec": 0, 00:13:08.259 "r_mbytes_per_sec": 0, 00:13:08.259 "w_mbytes_per_sec": 0 00:13:08.259 }, 00:13:08.259 "claimed": true, 00:13:08.259 "claim_type": "exclusive_write", 00:13:08.259 "zoned": false, 00:13:08.259 "supported_io_types": { 00:13:08.259 "read": true, 00:13:08.259 "write": true, 00:13:08.259 "unmap": true, 00:13:08.259 "flush": true, 00:13:08.259 "reset": true, 00:13:08.259 "nvme_admin": false, 00:13:08.259 "nvme_io": false, 00:13:08.259 "nvme_io_md": false, 00:13:08.259 "write_zeroes": true, 00:13:08.259 "zcopy": true, 00:13:08.259 "get_zone_info": false, 00:13:08.259 "zone_management": false, 00:13:08.259 "zone_append": false, 00:13:08.259 "compare": false, 00:13:08.259 "compare_and_write": false, 00:13:08.259 "abort": true, 00:13:08.259 "seek_hole": false, 00:13:08.259 "seek_data": false, 00:13:08.259 "copy": true, 00:13:08.259 "nvme_iov_md": false 00:13:08.259 }, 00:13:08.259 "memory_domains": [ 00:13:08.259 { 00:13:08.259 "dma_device_id": "system", 00:13:08.259 "dma_device_type": 1 00:13:08.259 }, 00:13:08.259 { 00:13:08.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.259 "dma_device_type": 2 00:13:08.259 } 00:13:08.259 ], 00:13:08.259 "driver_specific": {} 00:13:08.259 }' 00:13:08.259 06:26:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:08.259 06:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:08.827 "name": "BaseBdev2", 00:13:08.827 "aliases": [ 00:13:08.827 "72f2519b-48bc-11ef-a06c-59ddad71024c" 00:13:08.827 ], 00:13:08.827 "product_name": "Malloc disk", 00:13:08.827 "block_size": 512, 00:13:08.827 "num_blocks": 65536, 00:13:08.827 "uuid": "72f2519b-48bc-11ef-a06c-59ddad71024c", 00:13:08.827 "assigned_rate_limits": { 00:13:08.827 "rw_ios_per_sec": 0, 00:13:08.827 "rw_mbytes_per_sec": 0, 00:13:08.827 "r_mbytes_per_sec": 0, 00:13:08.827 "w_mbytes_per_sec": 0 00:13:08.827 }, 00:13:08.827 "claimed": true, 00:13:08.827 "claim_type": "exclusive_write", 00:13:08.827 "zoned": false, 00:13:08.827 "supported_io_types": { 00:13:08.827 "read": true, 00:13:08.827 "write": true, 00:13:08.827 "unmap": true, 00:13:08.827 "flush": true, 00:13:08.827 "reset": true, 00:13:08.827 "nvme_admin": false, 00:13:08.827 "nvme_io": false, 00:13:08.827 "nvme_io_md": false, 00:13:08.827 "write_zeroes": true, 00:13:08.827 "zcopy": true, 00:13:08.827 "get_zone_info": false, 00:13:08.827 "zone_management": false, 00:13:08.827 "zone_append": false, 00:13:08.827 "compare": false, 00:13:08.827 "compare_and_write": false, 00:13:08.827 "abort": true, 00:13:08.827 "seek_hole": false, 00:13:08.827 "seek_data": false, 00:13:08.827 "copy": true, 00:13:08.827 "nvme_iov_md": false 00:13:08.827 }, 00:13:08.827 "memory_domains": [ 00:13:08.827 { 00:13:08.827 "dma_device_id": "system", 00:13:08.827 "dma_device_type": 1 00:13:08.827 }, 00:13:08.827 { 00:13:08.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.827 "dma_device_type": 2 00:13:08.827 } 00:13:08.827 ], 00:13:08.827 "driver_specific": {} 00:13:08.827 }' 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:08.827 "name": "BaseBdev3", 00:13:08.827 "aliases": [ 00:13:08.827 "7363d87c-48bc-11ef-a06c-59ddad71024c" 00:13:08.827 ], 00:13:08.827 "product_name": "Malloc disk", 00:13:08.827 "block_size": 512, 00:13:08.827 "num_blocks": 65536, 00:13:08.827 "uuid": "7363d87c-48bc-11ef-a06c-59ddad71024c", 00:13:08.827 "assigned_rate_limits": { 00:13:08.827 "rw_ios_per_sec": 0, 00:13:08.827 "rw_mbytes_per_sec": 0, 00:13:08.827 "r_mbytes_per_sec": 0, 00:13:08.827 "w_mbytes_per_sec": 0 00:13:08.827 }, 00:13:08.827 "claimed": true, 00:13:08.827 "claim_type": "exclusive_write", 00:13:08.827 "zoned": false, 00:13:08.827 "supported_io_types": { 00:13:08.827 "read": true, 00:13:08.827 "write": true, 00:13:08.827 "unmap": true, 00:13:08.827 "flush": true, 00:13:08.827 "reset": true, 00:13:08.827 "nvme_admin": false, 00:13:08.827 "nvme_io": false, 00:13:08.827 "nvme_io_md": false, 00:13:08.827 "write_zeroes": true, 00:13:08.827 "zcopy": true, 00:13:08.827 "get_zone_info": false, 00:13:08.827 "zone_management": false, 00:13:08.827 "zone_append": false, 00:13:08.827 "compare": false, 00:13:08.827 "compare_and_write": false, 00:13:08.827 "abort": true, 00:13:08.827 "seek_hole": false, 00:13:08.827 "seek_data": false, 00:13:08.827 "copy": true, 00:13:08.827 "nvme_iov_md": false 00:13:08.827 }, 00:13:08.827 "memory_domains": [ 00:13:08.827 { 00:13:08.827 "dma_device_id": "system", 00:13:08.827 "dma_device_type": 1 00:13:08.827 }, 00:13:08.827 { 00:13:08.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.827 "dma_device_type": 2 00:13:08.827 } 00:13:08.827 ], 00:13:08.827 "driver_specific": {} 00:13:08.827 }' 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:08.827 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:09.086 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:13:09.086 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:09.086 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:09.086 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:09.086 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:09.086 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:09.086 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:09.086 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:09.086 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:09.086 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:09.086 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:09.345 [2024-07-23 06:26:21.649099] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:09.345 [2024-07-23 06:26:21.649128] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.345 [2024-07-23 06:26:21.649152] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.345 [2024-07-23 06:26:21.649166] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.345 [2024-07-23 06:26:21.649170] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c3abe234a00 name Existed_Raid, state offline 00:13:09.345 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 54797 00:13:09.345 06:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 54797 ']' 00:13:09.345 06:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 54797 00:13:09.345 06:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:13:09.345 06:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:09.345 06:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 54797 00:13:09.345 06:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:13:09.345 06:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:13:09.345 killing process with pid 54797 00:13:09.345 06:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:13:09.345 06:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 54797' 00:13:09.345 06:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 54797 00:13:09.345 [2024-07-23 06:26:21.674564] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:09.345 06:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 54797 00:13:09.345 [2024-07-23 06:26:21.692083] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.603 06:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # 
return 0 00:13:09.603 00:13:09.603 real 0m24.509s 00:13:09.603 user 0m44.880s 00:13:09.603 sys 0m3.290s 00:13:09.603 06:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:09.603 06:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.603 ************************************ 00:13:09.603 END TEST raid_state_function_test_sb 00:13:09.603 ************************************ 00:13:09.603 06:26:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:09.603 06:26:21 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:13:09.603 06:26:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:09.603 06:26:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.603 06:26:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.603 ************************************ 00:13:09.603 START TEST raid_superblock_test 00:13:09.603 ************************************ 00:13:09.603 06:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:13:09.603 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:13:09.603 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:13:09.603 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:13:09.603 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:13:09.603 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:13:09.603 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:13:09.603 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:13:09.603 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:13:09.603 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:13:09.603 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:13:09.603 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:13:09.603 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:13:09.603 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:13:09.604 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:13:09.604 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:13:09.604 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:13:09.604 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=55525 00:13:09.604 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 55525 /var/tmp/spdk-raid.sock 00:13:09.604 06:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:09.604 06:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 55525 ']' 00:13:09.604 06:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:09.604 06:26:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:09.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:09.604 06:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:09.604 06:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.604 06:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.604 [2024-07-23 06:26:21.925680] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:09.604 [2024-07-23 06:26:21.925927] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:10.169 EAL: TSC is not safe to use in SMP mode 00:13:10.169 EAL: TSC is not invariant 00:13:10.169 [2024-07-23 06:26:22.445167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.169 [2024-07-23 06:26:22.530324] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:10.169 [2024-07-23 06:26:22.532447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.169 [2024-07-23 06:26:22.533218] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.169 [2024-07-23 06:26:22.533234] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.736 06:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:10.736 06:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:13:10.736 06:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:13:10.736 06:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:10.736 06:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:13:10.736 06:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:13:10.736 06:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:10.736 06:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.736 06:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.736 06:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.736 06:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:10.736 malloc1 00:13:10.736 06:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:11.301 [2024-07-23 06:26:23.516701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:11.301 [2024-07-23 06:26:23.516763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.301 [2024-07-23 06:26:23.516775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1218cd034780 00:13:11.301 [2024-07-23 06:26:23.516784] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.301 [2024-07-23 06:26:23.517684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.301 [2024-07-23 06:26:23.517710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:11.301 pt1 00:13:11.301 06:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:11.301 06:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:11.301 06:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:13:11.301 06:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:13:11.301 06:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:11.301 06:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:11.301 06:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:11.301 06:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:11.301 06:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:11.301 malloc2 00:13:11.301 06:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:11.561 [2024-07-23 06:26:24.060717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:11.561 [2024-07-23 06:26:24.060771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.561 [2024-07-23 06:26:24.060783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1218cd034c80 00:13:11.561 [2024-07-23 06:26:24.060792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.561 [2024-07-23 06:26:24.061479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.561 [2024-07-23 06:26:24.061505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:11.561 pt2 00:13:11.561 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:11.561 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:11.561 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:13:11.561 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:13:11.561 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:11.561 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:11.561 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:11.561 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:11.561 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:11.819 malloc3 00:13:11.819 06:26:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:12.077 [2024-07-23 06:26:24.576727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:12.077 [2024-07-23 06:26:24.576782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.077 [2024-07-23 06:26:24.576795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1218cd035180 00:13:12.077 [2024-07-23 06:26:24.576804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.077 [2024-07-23 06:26:24.577472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.077 [2024-07-23 06:26:24.577495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:12.077 pt3 00:13:12.077 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:12.077 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:12.077 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:13:12.336 [2024-07-23 06:26:24.820743] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:12.336 [2024-07-23 06:26:24.821320] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:12.336 [2024-07-23 06:26:24.821344] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:12.336 [2024-07-23 06:26:24.821397] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1218cd035400 00:13:12.336 [2024-07-23 06:26:24.821404] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:12.336 [2024-07-23 06:26:24.821437] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1218cd097e20 00:13:12.336 [2024-07-23 06:26:24.821520] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1218cd035400 00:13:12.336 [2024-07-23 06:26:24.821525] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1218cd035400 00:13:12.336 [2024-07-23 06:26:24.821553] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.336 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:12.336 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:12.336 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:12.336 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:12.336 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:12.336 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:12.336 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:12.336 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:12.336 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:12.336 06:26:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:13:12.336 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.336 06:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.594 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:12.594 "name": "raid_bdev1", 00:13:12.594 "uuid": "7c4c63e1-48bc-11ef-a06c-59ddad71024c", 00:13:12.594 "strip_size_kb": 64, 00:13:12.594 "state": "online", 00:13:12.594 "raid_level": "concat", 00:13:12.594 "superblock": true, 00:13:12.594 "num_base_bdevs": 3, 00:13:12.594 "num_base_bdevs_discovered": 3, 00:13:12.594 "num_base_bdevs_operational": 3, 00:13:12.594 "base_bdevs_list": [ 00:13:12.594 { 00:13:12.594 "name": "pt1", 00:13:12.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.594 "is_configured": true, 00:13:12.594 "data_offset": 2048, 00:13:12.594 "data_size": 63488 00:13:12.594 }, 00:13:12.594 { 00:13:12.594 "name": "pt2", 00:13:12.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.594 "is_configured": true, 00:13:12.594 "data_offset": 2048, 00:13:12.594 "data_size": 63488 00:13:12.594 }, 00:13:12.594 { 00:13:12.594 "name": "pt3", 00:13:12.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.594 "is_configured": true, 00:13:12.594 "data_offset": 2048, 00:13:12.594 "data_size": 63488 00:13:12.594 } 00:13:12.594 ] 00:13:12.594 }' 00:13:12.594 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:12.594 06:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.174 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:13:13.174 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:13.174 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:13.174 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:13.174 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:13.174 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:13.174 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:13.174 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:13.443 [2024-07-23 06:26:25.708802] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.443 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:13.443 "name": "raid_bdev1", 00:13:13.443 "aliases": [ 00:13:13.443 "7c4c63e1-48bc-11ef-a06c-59ddad71024c" 00:13:13.443 ], 00:13:13.443 "product_name": "Raid Volume", 00:13:13.443 "block_size": 512, 00:13:13.443 "num_blocks": 190464, 00:13:13.443 "uuid": "7c4c63e1-48bc-11ef-a06c-59ddad71024c", 00:13:13.443 "assigned_rate_limits": { 00:13:13.443 "rw_ios_per_sec": 0, 00:13:13.443 "rw_mbytes_per_sec": 0, 00:13:13.443 "r_mbytes_per_sec": 0, 00:13:13.443 "w_mbytes_per_sec": 0 00:13:13.443 }, 00:13:13.443 "claimed": false, 00:13:13.443 "zoned": false, 00:13:13.443 "supported_io_types": { 00:13:13.443 "read": true, 00:13:13.443 "write": true, 00:13:13.443 "unmap": true, 
00:13:13.443 "flush": true, 00:13:13.443 "reset": true, 00:13:13.443 "nvme_admin": false, 00:13:13.443 "nvme_io": false, 00:13:13.443 "nvme_io_md": false, 00:13:13.443 "write_zeroes": true, 00:13:13.443 "zcopy": false, 00:13:13.443 "get_zone_info": false, 00:13:13.443 "zone_management": false, 00:13:13.443 "zone_append": false, 00:13:13.443 "compare": false, 00:13:13.443 "compare_and_write": false, 00:13:13.443 "abort": false, 00:13:13.443 "seek_hole": false, 00:13:13.443 "seek_data": false, 00:13:13.443 "copy": false, 00:13:13.443 "nvme_iov_md": false 00:13:13.443 }, 00:13:13.443 "memory_domains": [ 00:13:13.443 { 00:13:13.443 "dma_device_id": "system", 00:13:13.443 "dma_device_type": 1 00:13:13.443 }, 00:13:13.443 { 00:13:13.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.443 "dma_device_type": 2 00:13:13.443 }, 00:13:13.443 { 00:13:13.443 "dma_device_id": "system", 00:13:13.443 "dma_device_type": 1 00:13:13.443 }, 00:13:13.443 { 00:13:13.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.443 "dma_device_type": 2 00:13:13.443 }, 00:13:13.443 { 00:13:13.443 "dma_device_id": "system", 00:13:13.443 "dma_device_type": 1 00:13:13.443 }, 00:13:13.443 { 00:13:13.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.443 "dma_device_type": 2 00:13:13.443 } 00:13:13.444 ], 00:13:13.444 "driver_specific": { 00:13:13.444 "raid": { 00:13:13.444 "uuid": "7c4c63e1-48bc-11ef-a06c-59ddad71024c", 00:13:13.444 "strip_size_kb": 64, 00:13:13.444 "state": "online", 00:13:13.444 "raid_level": "concat", 00:13:13.444 "superblock": true, 00:13:13.444 "num_base_bdevs": 3, 00:13:13.444 "num_base_bdevs_discovered": 3, 00:13:13.444 "num_base_bdevs_operational": 3, 00:13:13.444 "base_bdevs_list": [ 00:13:13.444 { 00:13:13.444 "name": "pt1", 00:13:13.444 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.444 "is_configured": true, 00:13:13.444 "data_offset": 2048, 00:13:13.444 "data_size": 63488 00:13:13.444 }, 00:13:13.444 { 00:13:13.444 "name": "pt2", 00:13:13.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.444 "is_configured": true, 00:13:13.444 "data_offset": 2048, 00:13:13.444 "data_size": 63488 00:13:13.444 }, 00:13:13.444 { 00:13:13.444 "name": "pt3", 00:13:13.444 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.444 "is_configured": true, 00:13:13.444 "data_offset": 2048, 00:13:13.444 "data_size": 63488 00:13:13.444 } 00:13:13.444 ] 00:13:13.444 } 00:13:13.444 } 00:13:13.444 }' 00:13:13.444 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:13.444 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:13.444 pt2 00:13:13.444 pt3' 00:13:13.444 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:13.444 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:13.444 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:13.702 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:13.702 "name": "pt1", 00:13:13.702 "aliases": [ 00:13:13.702 "00000000-0000-0000-0000-000000000001" 00:13:13.702 ], 00:13:13.702 "product_name": "passthru", 00:13:13.702 "block_size": 512, 00:13:13.702 "num_blocks": 65536, 00:13:13.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.702 "assigned_rate_limits": { 
00:13:13.702 "rw_ios_per_sec": 0, 00:13:13.702 "rw_mbytes_per_sec": 0, 00:13:13.702 "r_mbytes_per_sec": 0, 00:13:13.702 "w_mbytes_per_sec": 0 00:13:13.702 }, 00:13:13.702 "claimed": true, 00:13:13.702 "claim_type": "exclusive_write", 00:13:13.702 "zoned": false, 00:13:13.702 "supported_io_types": { 00:13:13.702 "read": true, 00:13:13.702 "write": true, 00:13:13.702 "unmap": true, 00:13:13.702 "flush": true, 00:13:13.702 "reset": true, 00:13:13.702 "nvme_admin": false, 00:13:13.702 "nvme_io": false, 00:13:13.702 "nvme_io_md": false, 00:13:13.702 "write_zeroes": true, 00:13:13.702 "zcopy": true, 00:13:13.702 "get_zone_info": false, 00:13:13.702 "zone_management": false, 00:13:13.702 "zone_append": false, 00:13:13.702 "compare": false, 00:13:13.702 "compare_and_write": false, 00:13:13.702 "abort": true, 00:13:13.702 "seek_hole": false, 00:13:13.702 "seek_data": false, 00:13:13.702 "copy": true, 00:13:13.702 "nvme_iov_md": false 00:13:13.702 }, 00:13:13.702 "memory_domains": [ 00:13:13.702 { 00:13:13.702 "dma_device_id": "system", 00:13:13.702 "dma_device_type": 1 00:13:13.702 }, 00:13:13.702 { 00:13:13.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.703 "dma_device_type": 2 00:13:13.703 } 00:13:13.703 ], 00:13:13.703 "driver_specific": { 00:13:13.703 "passthru": { 00:13:13.703 "name": "pt1", 00:13:13.703 "base_bdev_name": "malloc1" 00:13:13.703 } 00:13:13.703 } 00:13:13.703 }' 00:13:13.703 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:13.703 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:13.703 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:13.703 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:13.703 06:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:13.703 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:13.703 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:13.703 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:13.703 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:13.703 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:13.703 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:13.703 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:13.703 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:13.703 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:13.703 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:13.961 "name": "pt2", 00:13:13.961 "aliases": [ 00:13:13.961 "00000000-0000-0000-0000-000000000002" 00:13:13.961 ], 00:13:13.961 "product_name": "passthru", 00:13:13.961 "block_size": 512, 00:13:13.961 "num_blocks": 65536, 00:13:13.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.961 "assigned_rate_limits": { 00:13:13.961 "rw_ios_per_sec": 0, 00:13:13.961 "rw_mbytes_per_sec": 0, 00:13:13.961 "r_mbytes_per_sec": 0, 00:13:13.961 "w_mbytes_per_sec": 0 00:13:13.961 
}, 00:13:13.961 "claimed": true, 00:13:13.961 "claim_type": "exclusive_write", 00:13:13.961 "zoned": false, 00:13:13.961 "supported_io_types": { 00:13:13.961 "read": true, 00:13:13.961 "write": true, 00:13:13.961 "unmap": true, 00:13:13.961 "flush": true, 00:13:13.961 "reset": true, 00:13:13.961 "nvme_admin": false, 00:13:13.961 "nvme_io": false, 00:13:13.961 "nvme_io_md": false, 00:13:13.961 "write_zeroes": true, 00:13:13.961 "zcopy": true, 00:13:13.961 "get_zone_info": false, 00:13:13.961 "zone_management": false, 00:13:13.961 "zone_append": false, 00:13:13.961 "compare": false, 00:13:13.961 "compare_and_write": false, 00:13:13.961 "abort": true, 00:13:13.961 "seek_hole": false, 00:13:13.961 "seek_data": false, 00:13:13.961 "copy": true, 00:13:13.961 "nvme_iov_md": false 00:13:13.961 }, 00:13:13.961 "memory_domains": [ 00:13:13.961 { 00:13:13.961 "dma_device_id": "system", 00:13:13.961 "dma_device_type": 1 00:13:13.961 }, 00:13:13.961 { 00:13:13.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.961 "dma_device_type": 2 00:13:13.961 } 00:13:13.961 ], 00:13:13.961 "driver_specific": { 00:13:13.961 "passthru": { 00:13:13.961 "name": "pt2", 00:13:13.961 "base_bdev_name": "malloc2" 00:13:13.961 } 00:13:13.961 } 00:13:13.961 }' 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:13.961 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:14.219 "name": "pt3", 00:13:14.219 "aliases": [ 00:13:14.219 "00000000-0000-0000-0000-000000000003" 00:13:14.219 ], 00:13:14.219 "product_name": "passthru", 00:13:14.219 "block_size": 512, 00:13:14.219 "num_blocks": 65536, 00:13:14.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.219 "assigned_rate_limits": { 00:13:14.219 "rw_ios_per_sec": 0, 00:13:14.219 "rw_mbytes_per_sec": 0, 00:13:14.219 "r_mbytes_per_sec": 0, 00:13:14.219 "w_mbytes_per_sec": 0 00:13:14.219 }, 00:13:14.219 "claimed": true, 00:13:14.219 "claim_type": "exclusive_write", 00:13:14.219 "zoned": false, 00:13:14.219 "supported_io_types": { 
00:13:14.219 "read": true, 00:13:14.219 "write": true, 00:13:14.219 "unmap": true, 00:13:14.219 "flush": true, 00:13:14.219 "reset": true, 00:13:14.219 "nvme_admin": false, 00:13:14.219 "nvme_io": false, 00:13:14.219 "nvme_io_md": false, 00:13:14.219 "write_zeroes": true, 00:13:14.219 "zcopy": true, 00:13:14.219 "get_zone_info": false, 00:13:14.219 "zone_management": false, 00:13:14.219 "zone_append": false, 00:13:14.219 "compare": false, 00:13:14.219 "compare_and_write": false, 00:13:14.219 "abort": true, 00:13:14.219 "seek_hole": false, 00:13:14.219 "seek_data": false, 00:13:14.219 "copy": true, 00:13:14.219 "nvme_iov_md": false 00:13:14.219 }, 00:13:14.219 "memory_domains": [ 00:13:14.219 { 00:13:14.219 "dma_device_id": "system", 00:13:14.219 "dma_device_type": 1 00:13:14.219 }, 00:13:14.219 { 00:13:14.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.219 "dma_device_type": 2 00:13:14.219 } 00:13:14.219 ], 00:13:14.219 "driver_specific": { 00:13:14.219 "passthru": { 00:13:14.219 "name": "pt3", 00:13:14.219 "base_bdev_name": "malloc3" 00:13:14.219 } 00:13:14.219 } 00:13:14.219 }' 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:14.219 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:13:14.480 [2024-07-23 06:26:26.940823] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.480 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=7c4c63e1-48bc-11ef-a06c-59ddad71024c 00:13:14.480 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 7c4c63e1-48bc-11ef-a06c-59ddad71024c ']' 00:13:14.480 06:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:14.737 [2024-07-23 06:26:27.184778] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:14.737 [2024-07-23 06:26:27.184801] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.737 [2024-07-23 06:26:27.184825] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.737 [2024-07-23 06:26:27.184841] 
bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.737 [2024-07-23 06:26:27.184845] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1218cd035400 name raid_bdev1, state offline 00:13:14.737 06:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.737 06:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:13:14.995 06:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:13:14.995 06:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:13:14.995 06:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:14.995 06:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:15.264 06:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:15.264 06:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:15.522 06:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:15.522 06:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:15.779 06:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:15.779 06:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:16.037 06:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:13:16.037 06:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:16.037 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:13:16.037 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:16.037 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:16.037 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:16.037 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:16.037 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:16.037 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:16.037 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:16.037 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
00:13:16.037 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:16.037 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:16.604 [2024-07-23 06:26:28.828839] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:16.604 [2024-07-23 06:26:28.829424] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:16.604 [2024-07-23 06:26:28.829443] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:16.604 [2024-07-23 06:26:28.829457] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:16.604 [2024-07-23 06:26:28.829488] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:16.604 [2024-07-23 06:26:28.829499] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:16.604 [2024-07-23 06:26:28.829508] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:16.604 [2024-07-23 06:26:28.829513] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1218cd035180 name raid_bdev1, state configuring 00:13:16.604 request: 00:13:16.604 { 00:13:16.604 "name": "raid_bdev1", 00:13:16.604 "raid_level": "concat", 00:13:16.604 "base_bdevs": [ 00:13:16.604 "malloc1", 00:13:16.604 "malloc2", 00:13:16.604 "malloc3" 00:13:16.604 ], 00:13:16.604 "strip_size_kb": 64, 00:13:16.604 "superblock": false, 00:13:16.604 "method": "bdev_raid_create", 00:13:16.604 "req_id": 1 00:13:16.604 } 00:13:16.604 Got JSON-RPC error response 00:13:16.604 response: 00:13:16.604 { 00:13:16.604 "code": -17, 00:13:16.604 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:16.604 } 00:13:16.604 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:16.604 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:16.604 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:16.604 06:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:16.604 06:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.604 06:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:13:16.604 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:16.604 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:16.604 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:16.862 [2024-07-23 06:26:29.316845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:16.862 [2024-07-23 06:26:29.316898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.862 [2024-07-23 06:26:29.316910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x1218cd034c80 00:13:16.862 [2024-07-23 06:26:29.316919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.862 [2024-07-23 06:26:29.317566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.862 [2024-07-23 06:26:29.317593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:16.862 [2024-07-23 06:26:29.317648] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:16.862 [2024-07-23 06:26:29.317661] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:16.862 pt1 00:13:16.862 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:16.862 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:16.862 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:16.862 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:16.862 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:16.862 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:16.862 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:16.862 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:16.862 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:16.862 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:16.863 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.863 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.120 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.120 "name": "raid_bdev1", 00:13:17.121 "uuid": "7c4c63e1-48bc-11ef-a06c-59ddad71024c", 00:13:17.121 "strip_size_kb": 64, 00:13:17.121 "state": "configuring", 00:13:17.121 "raid_level": "concat", 00:13:17.121 "superblock": true, 00:13:17.121 "num_base_bdevs": 3, 00:13:17.121 "num_base_bdevs_discovered": 1, 00:13:17.121 "num_base_bdevs_operational": 3, 00:13:17.121 "base_bdevs_list": [ 00:13:17.121 { 00:13:17.121 "name": "pt1", 00:13:17.121 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:17.121 "is_configured": true, 00:13:17.121 "data_offset": 2048, 00:13:17.121 "data_size": 63488 00:13:17.121 }, 00:13:17.121 { 00:13:17.121 "name": null, 00:13:17.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:17.121 "is_configured": false, 00:13:17.121 "data_offset": 2048, 00:13:17.121 "data_size": 63488 00:13:17.121 }, 00:13:17.121 { 00:13:17.121 "name": null, 00:13:17.121 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:17.121 "is_configured": false, 00:13:17.121 "data_offset": 2048, 00:13:17.121 "data_size": 63488 00:13:17.121 } 00:13:17.121 ] 00:13:17.121 }' 00:13:17.121 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.121 06:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.415 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 
00:13:17.415 06:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:17.680 [2024-07-23 06:26:30.132867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:17.680 [2024-07-23 06:26:30.132921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.680 [2024-07-23 06:26:30.132934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1218cd035680 00:13:17.680 [2024-07-23 06:26:30.132942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.680 [2024-07-23 06:26:30.133062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.680 [2024-07-23 06:26:30.133083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:17.680 [2024-07-23 06:26:30.133108] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:17.680 [2024-07-23 06:26:30.133117] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:17.680 pt2 00:13:17.680 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:17.938 [2024-07-23 06:26:30.428880] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:17.938 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:17.938 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:17.938 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:17.938 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:17.938 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:17.938 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:17.938 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:17.938 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:17.938 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:17.938 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:17.938 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.938 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.196 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:18.196 "name": "raid_bdev1", 00:13:18.196 "uuid": "7c4c63e1-48bc-11ef-a06c-59ddad71024c", 00:13:18.196 "strip_size_kb": 64, 00:13:18.196 "state": "configuring", 00:13:18.196 "raid_level": "concat", 00:13:18.196 "superblock": true, 00:13:18.196 "num_base_bdevs": 3, 00:13:18.196 "num_base_bdevs_discovered": 1, 00:13:18.196 "num_base_bdevs_operational": 3, 00:13:18.196 "base_bdevs_list": [ 00:13:18.196 { 00:13:18.196 "name": "pt1", 00:13:18.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:18.196 "is_configured": 
true, 00:13:18.196 "data_offset": 2048, 00:13:18.196 "data_size": 63488 00:13:18.196 }, 00:13:18.196 { 00:13:18.196 "name": null, 00:13:18.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:18.196 "is_configured": false, 00:13:18.196 "data_offset": 2048, 00:13:18.196 "data_size": 63488 00:13:18.196 }, 00:13:18.196 { 00:13:18.196 "name": null, 00:13:18.196 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:18.196 "is_configured": false, 00:13:18.196 "data_offset": 2048, 00:13:18.196 "data_size": 63488 00:13:18.196 } 00:13:18.196 ] 00:13:18.196 }' 00:13:18.196 06:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:18.196 06:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.763 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:13:18.763 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:18.763 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:19.021 [2024-07-23 06:26:31.304901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:19.021 [2024-07-23 06:26:31.304963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.021 [2024-07-23 06:26:31.304976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1218cd035680 00:13:19.021 [2024-07-23 06:26:31.304985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.021 [2024-07-23 06:26:31.305102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.021 [2024-07-23 06:26:31.305114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:19.021 [2024-07-23 06:26:31.305139] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:19.021 [2024-07-23 06:26:31.305148] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:19.021 pt2 00:13:19.021 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:19.021 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:19.021 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:19.279 [2024-07-23 06:26:31.560912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:19.279 [2024-07-23 06:26:31.560978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.279 [2024-07-23 06:26:31.560994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1218cd035400 00:13:19.280 [2024-07-23 06:26:31.561005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.280 [2024-07-23 06:26:31.561147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.280 [2024-07-23 06:26:31.561169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:19.280 [2024-07-23 06:26:31.561198] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:19.280 [2024-07-23 06:26:31.561210] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt3 is claimed 00:13:19.280 [2024-07-23 06:26:31.561243] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1218cd034780 00:13:19.280 [2024-07-23 06:26:31.561248] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:19.280 [2024-07-23 06:26:31.561283] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1218cd097e20 00:13:19.280 [2024-07-23 06:26:31.561349] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1218cd034780 00:13:19.280 [2024-07-23 06:26:31.561355] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1218cd034780 00:13:19.280 [2024-07-23 06:26:31.561381] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.280 pt3 00:13:19.280 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:19.280 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:19.280 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:19.280 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:19.280 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:19.280 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:19.280 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:19.280 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:19.280 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:19.280 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:19.280 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:19.280 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:19.280 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.280 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.580 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:19.580 "name": "raid_bdev1", 00:13:19.580 "uuid": "7c4c63e1-48bc-11ef-a06c-59ddad71024c", 00:13:19.580 "strip_size_kb": 64, 00:13:19.580 "state": "online", 00:13:19.580 "raid_level": "concat", 00:13:19.580 "superblock": true, 00:13:19.580 "num_base_bdevs": 3, 00:13:19.580 "num_base_bdevs_discovered": 3, 00:13:19.580 "num_base_bdevs_operational": 3, 00:13:19.580 "base_bdevs_list": [ 00:13:19.580 { 00:13:19.580 "name": "pt1", 00:13:19.581 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:19.581 "is_configured": true, 00:13:19.581 "data_offset": 2048, 00:13:19.581 "data_size": 63488 00:13:19.581 }, 00:13:19.581 { 00:13:19.581 "name": "pt2", 00:13:19.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:19.581 "is_configured": true, 00:13:19.581 "data_offset": 2048, 00:13:19.581 "data_size": 63488 00:13:19.581 }, 00:13:19.581 { 00:13:19.581 "name": "pt3", 00:13:19.581 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:19.581 "is_configured": true, 00:13:19.581 "data_offset": 2048, 
00:13:19.581 "data_size": 63488 00:13:19.581 } 00:13:19.581 ] 00:13:19.581 }' 00:13:19.581 06:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:19.581 06:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.839 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:13:19.839 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:19.839 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:19.839 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:19.839 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:19.839 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:19.839 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:19.839 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:20.097 [2024-07-23 06:26:32.440981] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.097 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:20.097 "name": "raid_bdev1", 00:13:20.097 "aliases": [ 00:13:20.097 "7c4c63e1-48bc-11ef-a06c-59ddad71024c" 00:13:20.097 ], 00:13:20.097 "product_name": "Raid Volume", 00:13:20.097 "block_size": 512, 00:13:20.097 "num_blocks": 190464, 00:13:20.097 "uuid": "7c4c63e1-48bc-11ef-a06c-59ddad71024c", 00:13:20.097 "assigned_rate_limits": { 00:13:20.097 "rw_ios_per_sec": 0, 00:13:20.097 "rw_mbytes_per_sec": 0, 00:13:20.097 "r_mbytes_per_sec": 0, 00:13:20.097 "w_mbytes_per_sec": 0 00:13:20.097 }, 00:13:20.097 "claimed": false, 00:13:20.097 "zoned": false, 00:13:20.097 "supported_io_types": { 00:13:20.097 "read": true, 00:13:20.097 "write": true, 00:13:20.097 "unmap": true, 00:13:20.097 "flush": true, 00:13:20.097 "reset": true, 00:13:20.097 "nvme_admin": false, 00:13:20.097 "nvme_io": false, 00:13:20.097 "nvme_io_md": false, 00:13:20.097 "write_zeroes": true, 00:13:20.097 "zcopy": false, 00:13:20.097 "get_zone_info": false, 00:13:20.097 "zone_management": false, 00:13:20.097 "zone_append": false, 00:13:20.097 "compare": false, 00:13:20.097 "compare_and_write": false, 00:13:20.097 "abort": false, 00:13:20.097 "seek_hole": false, 00:13:20.097 "seek_data": false, 00:13:20.097 "copy": false, 00:13:20.097 "nvme_iov_md": false 00:13:20.097 }, 00:13:20.097 "memory_domains": [ 00:13:20.097 { 00:13:20.097 "dma_device_id": "system", 00:13:20.097 "dma_device_type": 1 00:13:20.097 }, 00:13:20.097 { 00:13:20.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.097 "dma_device_type": 2 00:13:20.097 }, 00:13:20.097 { 00:13:20.097 "dma_device_id": "system", 00:13:20.097 "dma_device_type": 1 00:13:20.097 }, 00:13:20.097 { 00:13:20.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.097 "dma_device_type": 2 00:13:20.097 }, 00:13:20.097 { 00:13:20.097 "dma_device_id": "system", 00:13:20.097 "dma_device_type": 1 00:13:20.097 }, 00:13:20.097 { 00:13:20.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.097 "dma_device_type": 2 00:13:20.097 } 00:13:20.097 ], 00:13:20.097 "driver_specific": { 00:13:20.097 "raid": { 00:13:20.097 "uuid": "7c4c63e1-48bc-11ef-a06c-59ddad71024c", 00:13:20.097 "strip_size_kb": 64, 00:13:20.097 
"state": "online", 00:13:20.097 "raid_level": "concat", 00:13:20.097 "superblock": true, 00:13:20.097 "num_base_bdevs": 3, 00:13:20.097 "num_base_bdevs_discovered": 3, 00:13:20.097 "num_base_bdevs_operational": 3, 00:13:20.097 "base_bdevs_list": [ 00:13:20.097 { 00:13:20.097 "name": "pt1", 00:13:20.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:20.097 "is_configured": true, 00:13:20.097 "data_offset": 2048, 00:13:20.097 "data_size": 63488 00:13:20.097 }, 00:13:20.097 { 00:13:20.097 "name": "pt2", 00:13:20.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:20.097 "is_configured": true, 00:13:20.097 "data_offset": 2048, 00:13:20.097 "data_size": 63488 00:13:20.097 }, 00:13:20.097 { 00:13:20.097 "name": "pt3", 00:13:20.097 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:20.097 "is_configured": true, 00:13:20.097 "data_offset": 2048, 00:13:20.097 "data_size": 63488 00:13:20.097 } 00:13:20.097 ] 00:13:20.097 } 00:13:20.097 } 00:13:20.097 }' 00:13:20.097 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:20.097 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:20.097 pt2 00:13:20.097 pt3' 00:13:20.097 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:20.097 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:20.097 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:20.356 "name": "pt1", 00:13:20.356 "aliases": [ 00:13:20.356 "00000000-0000-0000-0000-000000000001" 00:13:20.356 ], 00:13:20.356 "product_name": "passthru", 00:13:20.356 "block_size": 512, 00:13:20.356 "num_blocks": 65536, 00:13:20.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:20.356 "assigned_rate_limits": { 00:13:20.356 "rw_ios_per_sec": 0, 00:13:20.356 "rw_mbytes_per_sec": 0, 00:13:20.356 "r_mbytes_per_sec": 0, 00:13:20.356 "w_mbytes_per_sec": 0 00:13:20.356 }, 00:13:20.356 "claimed": true, 00:13:20.356 "claim_type": "exclusive_write", 00:13:20.356 "zoned": false, 00:13:20.356 "supported_io_types": { 00:13:20.356 "read": true, 00:13:20.356 "write": true, 00:13:20.356 "unmap": true, 00:13:20.356 "flush": true, 00:13:20.356 "reset": true, 00:13:20.356 "nvme_admin": false, 00:13:20.356 "nvme_io": false, 00:13:20.356 "nvme_io_md": false, 00:13:20.356 "write_zeroes": true, 00:13:20.356 "zcopy": true, 00:13:20.356 "get_zone_info": false, 00:13:20.356 "zone_management": false, 00:13:20.356 "zone_append": false, 00:13:20.356 "compare": false, 00:13:20.356 "compare_and_write": false, 00:13:20.356 "abort": true, 00:13:20.356 "seek_hole": false, 00:13:20.356 "seek_data": false, 00:13:20.356 "copy": true, 00:13:20.356 "nvme_iov_md": false 00:13:20.356 }, 00:13:20.356 "memory_domains": [ 00:13:20.356 { 00:13:20.356 "dma_device_id": "system", 00:13:20.356 "dma_device_type": 1 00:13:20.356 }, 00:13:20.356 { 00:13:20.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.356 "dma_device_type": 2 00:13:20.356 } 00:13:20.356 ], 00:13:20.356 "driver_specific": { 00:13:20.356 "passthru": { 00:13:20.356 "name": "pt1", 00:13:20.356 "base_bdev_name": "malloc1" 00:13:20.356 } 00:13:20.356 } 00:13:20.356 }' 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:20.356 06:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:20.615 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:20.615 "name": "pt2", 00:13:20.615 "aliases": [ 00:13:20.615 "00000000-0000-0000-0000-000000000002" 00:13:20.615 ], 00:13:20.615 "product_name": "passthru", 00:13:20.615 "block_size": 512, 00:13:20.615 "num_blocks": 65536, 00:13:20.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:20.615 "assigned_rate_limits": { 00:13:20.615 "rw_ios_per_sec": 0, 00:13:20.615 "rw_mbytes_per_sec": 0, 00:13:20.615 "r_mbytes_per_sec": 0, 00:13:20.615 "w_mbytes_per_sec": 0 00:13:20.615 }, 00:13:20.615 "claimed": true, 00:13:20.615 "claim_type": "exclusive_write", 00:13:20.615 "zoned": false, 00:13:20.615 "supported_io_types": { 00:13:20.615 "read": true, 00:13:20.615 "write": true, 00:13:20.615 "unmap": true, 00:13:20.615 "flush": true, 00:13:20.615 "reset": true, 00:13:20.615 "nvme_admin": false, 00:13:20.615 "nvme_io": false, 00:13:20.615 "nvme_io_md": false, 00:13:20.615 "write_zeroes": true, 00:13:20.615 "zcopy": true, 00:13:20.615 "get_zone_info": false, 00:13:20.615 "zone_management": false, 00:13:20.615 "zone_append": false, 00:13:20.615 "compare": false, 00:13:20.615 "compare_and_write": false, 00:13:20.615 "abort": true, 00:13:20.615 "seek_hole": false, 00:13:20.615 "seek_data": false, 00:13:20.615 "copy": true, 00:13:20.615 "nvme_iov_md": false 00:13:20.615 }, 00:13:20.615 "memory_domains": [ 00:13:20.615 { 00:13:20.615 "dma_device_id": "system", 00:13:20.615 "dma_device_type": 1 00:13:20.615 }, 00:13:20.615 { 00:13:20.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.615 "dma_device_type": 2 00:13:20.615 } 00:13:20.615 ], 00:13:20.615 "driver_specific": { 00:13:20.615 "passthru": { 00:13:20.615 "name": "pt2", 00:13:20.615 "base_bdev_name": "malloc2" 00:13:20.615 } 00:13:20.615 } 00:13:20.615 }' 00:13:20.615 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.615 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.615 
06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:20.615 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.615 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.615 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:20.615 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.875 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.875 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:20.875 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.875 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.875 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:20.875 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:20.875 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:20.875 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:21.134 "name": "pt3", 00:13:21.134 "aliases": [ 00:13:21.134 "00000000-0000-0000-0000-000000000003" 00:13:21.134 ], 00:13:21.134 "product_name": "passthru", 00:13:21.134 "block_size": 512, 00:13:21.134 "num_blocks": 65536, 00:13:21.134 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:21.134 "assigned_rate_limits": { 00:13:21.134 "rw_ios_per_sec": 0, 00:13:21.134 "rw_mbytes_per_sec": 0, 00:13:21.134 "r_mbytes_per_sec": 0, 00:13:21.134 "w_mbytes_per_sec": 0 00:13:21.134 }, 00:13:21.134 "claimed": true, 00:13:21.134 "claim_type": "exclusive_write", 00:13:21.134 "zoned": false, 00:13:21.134 "supported_io_types": { 00:13:21.134 "read": true, 00:13:21.134 "write": true, 00:13:21.134 "unmap": true, 00:13:21.134 "flush": true, 00:13:21.134 "reset": true, 00:13:21.134 "nvme_admin": false, 00:13:21.134 "nvme_io": false, 00:13:21.134 "nvme_io_md": false, 00:13:21.134 "write_zeroes": true, 00:13:21.134 "zcopy": true, 00:13:21.134 "get_zone_info": false, 00:13:21.134 "zone_management": false, 00:13:21.134 "zone_append": false, 00:13:21.134 "compare": false, 00:13:21.134 "compare_and_write": false, 00:13:21.134 "abort": true, 00:13:21.134 "seek_hole": false, 00:13:21.134 "seek_data": false, 00:13:21.134 "copy": true, 00:13:21.134 "nvme_iov_md": false 00:13:21.134 }, 00:13:21.134 "memory_domains": [ 00:13:21.134 { 00:13:21.134 "dma_device_id": "system", 00:13:21.134 "dma_device_type": 1 00:13:21.134 }, 00:13:21.134 { 00:13:21.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.134 "dma_device_type": 2 00:13:21.134 } 00:13:21.134 ], 00:13:21.134 "driver_specific": { 00:13:21.134 "passthru": { 00:13:21.134 "name": "pt3", 00:13:21.134 "base_bdev_name": "malloc3" 00:13:21.134 } 00:13:21.134 } 00:13:21.134 }' 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:21.134 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:13:21.401 [2024-07-23 06:26:33.749015] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.401 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 7c4c63e1-48bc-11ef-a06c-59ddad71024c '!=' 7c4c63e1-48bc-11ef-a06c-59ddad71024c ']' 00:13:21.401 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:13:21.401 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:21.401 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:21.401 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 55525 00:13:21.401 06:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 55525 ']' 00:13:21.401 06:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 55525 00:13:21.401 06:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:13:21.401 06:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:21.401 06:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 55525 00:13:21.401 06:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:13:21.401 06:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:13:21.401 killing process with pid 55525 00:13:21.401 06:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:13:21.401 06:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55525' 00:13:21.402 06:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 55525 00:13:21.402 [2024-07-23 06:26:33.778909] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:21.402 06:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 55525 00:13:21.402 [2024-07-23 06:26:33.778937] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:21.402 [2024-07-23 06:26:33.778952] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:21.402 [2024-07-23 06:26:33.778956] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1218cd034780 name 
raid_bdev1, state offline 00:13:21.402 [2024-07-23 06:26:33.796278] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:21.659 06:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:21.659 00:13:21.659 real 0m12.054s 00:13:21.659 user 0m21.447s 00:13:21.659 sys 0m1.874s 00:13:21.659 ************************************ 00:13:21.659 END TEST raid_superblock_test 00:13:21.659 ************************************ 00:13:21.659 06:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:21.659 06:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.659 06:26:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:21.659 06:26:34 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:13:21.659 06:26:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:21.659 06:26:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:21.659 06:26:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:21.659 ************************************ 00:13:21.659 START TEST raid_read_error_test 00:13:21.659 ************************************ 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local 
fail_per_s 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.bKq00UQHUl 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55880 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55880 /var/tmp/spdk-raid.sock 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 55880 ']' 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:21.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.659 06:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.659 [2024-07-23 06:26:34.031215] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:21.659 [2024-07-23 06:26:34.031438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:22.223 EAL: TSC is not safe to use in SMP mode 00:13:22.223 EAL: TSC is not invariant 00:13:22.223 [2024-07-23 06:26:34.549158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.223 [2024-07-23 06:26:34.641518] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:13:22.223 [2024-07-23 06:26:34.643662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.223 [2024-07-23 06:26:34.644430] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.223 [2024-07-23 06:26:34.644444] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.787 06:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.787 06:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:22.787 06:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:22.787 06:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:23.044 BaseBdev1_malloc 00:13:23.044 06:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:24.016 true 00:13:24.016 06:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:24.016 [2024-07-23 06:26:35.980879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:24.016 [2024-07-23 06:26:35.980940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.016 [2024-07-23 06:26:35.980966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2d230e634780 00:13:24.016 [2024-07-23 06:26:35.980974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.016 [2024-07-23 06:26:35.981620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.016 [2024-07-23 06:26:35.981644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:24.016 BaseBdev1 00:13:24.016 06:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:24.016 06:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:24.016 BaseBdev2_malloc 00:13:24.017 06:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:24.017 true 00:13:24.017 06:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:24.317 [2024-07-23 06:26:36.736896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:24.317 [2024-07-23 06:26:36.736942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.317 [2024-07-23 06:26:36.736967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2d230e634c80 00:13:24.317 [2024-07-23 06:26:36.736976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.317 [2024-07-23 06:26:36.737629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.317 [2024-07-23 06:26:36.737654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:13:24.317 BaseBdev2 00:13:24.317 06:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:24.317 06:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:24.593 BaseBdev3_malloc 00:13:24.593 06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:24.851 true 00:13:24.851 06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:25.108 [2024-07-23 06:26:37.524928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:25.108 [2024-07-23 06:26:37.524980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.108 [2024-07-23 06:26:37.525007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2d230e635180 00:13:25.108 [2024-07-23 06:26:37.525019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.108 [2024-07-23 06:26:37.525678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.108 [2024-07-23 06:26:37.525704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:25.108 BaseBdev3 00:13:25.108 06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:25.366 [2024-07-23 06:26:37.804937] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.366 [2024-07-23 06:26:37.805521] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:25.366 [2024-07-23 06:26:37.805545] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.366 [2024-07-23 06:26:37.805601] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2d230e635400 00:13:25.366 [2024-07-23 06:26:37.805607] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:25.367 [2024-07-23 06:26:37.805644] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2d230e6a0e20 00:13:25.367 [2024-07-23 06:26:37.805716] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2d230e635400 00:13:25.367 [2024-07-23 06:26:37.805721] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2d230e635400 00:13:25.367 [2024-07-23 06:26:37.805755] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.367 06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:25.367 06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:25.367 06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:25.367 06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:25.367 06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:25.367 
06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:25.367 06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:25.367 06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:25.367 06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:25.367 06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:25.367 06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.367 06:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.625 06:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:25.625 "name": "raid_bdev1", 00:13:25.625 "uuid": "84099f06-48bc-11ef-a06c-59ddad71024c", 00:13:25.625 "strip_size_kb": 64, 00:13:25.625 "state": "online", 00:13:25.625 "raid_level": "concat", 00:13:25.625 "superblock": true, 00:13:25.625 "num_base_bdevs": 3, 00:13:25.625 "num_base_bdevs_discovered": 3, 00:13:25.625 "num_base_bdevs_operational": 3, 00:13:25.625 "base_bdevs_list": [ 00:13:25.625 { 00:13:25.625 "name": "BaseBdev1", 00:13:25.625 "uuid": "05c5d681-ed3a-dc5f-867e-34ded93c9654", 00:13:25.625 "is_configured": true, 00:13:25.625 "data_offset": 2048, 00:13:25.625 "data_size": 63488 00:13:25.625 }, 00:13:25.625 { 00:13:25.625 "name": "BaseBdev2", 00:13:25.625 "uuid": "14cb2c63-47ac-8552-bc05-6fe8b944bd37", 00:13:25.625 "is_configured": true, 00:13:25.625 "data_offset": 2048, 00:13:25.625 "data_size": 63488 00:13:25.625 }, 00:13:25.625 { 00:13:25.625 "name": "BaseBdev3", 00:13:25.625 "uuid": "062b49a7-11ee-6a57-bb3d-e28645935ed0", 00:13:25.625 "is_configured": true, 00:13:25.625 "data_offset": 2048, 00:13:25.625 "data_size": 63488 00:13:25.625 } 00:13:25.625 ] 00:13:25.625 }' 00:13:25.626 06:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:25.626 06:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.192 06:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:26.192 06:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:26.192 [2024-07-23 06:26:38.557158] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2d230e6a0ec0 00:13:27.124 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:27.382 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:27.382 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:27.382 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:27.382 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:27.383 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:27.383 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:27.383 
06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:27.383 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:27.383 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:27.383 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:27.383 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:27.383 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:27.383 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:27.383 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.383 06:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.640 06:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:27.640 "name": "raid_bdev1", 00:13:27.640 "uuid": "84099f06-48bc-11ef-a06c-59ddad71024c", 00:13:27.640 "strip_size_kb": 64, 00:13:27.640 "state": "online", 00:13:27.640 "raid_level": "concat", 00:13:27.640 "superblock": true, 00:13:27.640 "num_base_bdevs": 3, 00:13:27.640 "num_base_bdevs_discovered": 3, 00:13:27.640 "num_base_bdevs_operational": 3, 00:13:27.640 "base_bdevs_list": [ 00:13:27.640 { 00:13:27.640 "name": "BaseBdev1", 00:13:27.640 "uuid": "05c5d681-ed3a-dc5f-867e-34ded93c9654", 00:13:27.640 "is_configured": true, 00:13:27.640 "data_offset": 2048, 00:13:27.640 "data_size": 63488 00:13:27.640 }, 00:13:27.640 { 00:13:27.640 "name": "BaseBdev2", 00:13:27.640 "uuid": "14cb2c63-47ac-8552-bc05-6fe8b944bd37", 00:13:27.640 "is_configured": true, 00:13:27.640 "data_offset": 2048, 00:13:27.640 "data_size": 63488 00:13:27.640 }, 00:13:27.640 { 00:13:27.640 "name": "BaseBdev3", 00:13:27.640 "uuid": "062b49a7-11ee-6a57-bb3d-e28645935ed0", 00:13:27.640 "is_configured": true, 00:13:27.640 "data_offset": 2048, 00:13:27.640 "data_size": 63488 00:13:27.640 } 00:13:27.640 ] 00:13:27.640 }' 00:13:27.640 06:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:27.640 06:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.897 06:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:28.463 [2024-07-23 06:26:40.703029] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.463 [2024-07-23 06:26:40.703059] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.463 [2024-07-23 06:26:40.703380] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.463 [2024-07-23 06:26:40.703390] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.463 [2024-07-23 06:26:40.703398] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.463 [2024-07-23 06:26:40.703402] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2d230e635400 name raid_bdev1, state offline 00:13:28.463 0 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55880 00:13:28.463 06:26:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 55880 ']' 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 55880 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 55880 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:13:28.463 killing process with pid 55880 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55880' 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 55880 00:13:28.463 [2024-07-23 06:26:40.732386] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 55880 00:13:28.463 [2024-07-23 06:26:40.749408] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.bKq00UQHUl 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:13:28.463 00:13:28.463 real 0m6.918s 00:13:28.463 user 0m10.875s 00:13:28.463 sys 0m1.212s 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.463 ************************************ 00:13:28.463 END TEST raid_read_error_test 00:13:28.463 ************************************ 00:13:28.463 06:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.463 06:26:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:28.463 06:26:40 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:13:28.463 06:26:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:28.463 06:26:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.463 06:26:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.463 ************************************ 00:13:28.464 START TEST raid_write_error_test 00:13:28.464 ************************************ 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:28.464 06:26:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.jtPy6K8aMN 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=56015 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 56015 /var/tmp/spdk-raid.sock 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 56015 ']' 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:28.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:28.464 06:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.722 [2024-07-23 06:26:40.986695] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:28.722 [2024-07-23 06:26:40.986879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:29.288 EAL: TSC is not safe to use in SMP mode 00:13:29.288 EAL: TSC is not invariant 00:13:29.288 [2024-07-23 06:26:41.544399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.288 [2024-07-23 06:26:41.631141] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:29.288 [2024-07-23 06:26:41.633239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.288 [2024-07-23 06:26:41.634021] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.288 [2024-07-23 06:26:41.634034] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.546 06:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.546 06:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:29.546 06:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:29.546 06:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:29.805 BaseBdev1_malloc 00:13:29.805 06:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:30.063 true 00:13:30.063 06:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:30.322 [2024-07-23 06:26:42.818275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:30.322 [2024-07-23 06:26:42.818340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.322 [2024-07-23 06:26:42.818367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e3aa34780 00:13:30.322 [2024-07-23 06:26:42.818376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.322 [2024-07-23 06:26:42.819043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.322 [2024-07-23 06:26:42.819072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:30.322 BaseBdev1 00:13:30.322 06:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:30.322 06:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:30.580 BaseBdev2_malloc 00:13:30.580 06:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:30.850 true 00:13:31.121 06:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:31.121 [2024-07-23 06:26:43.642289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:31.121 [2024-07-23 06:26:43.642346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.121 [2024-07-23 06:26:43.642373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e3aa34c80 00:13:31.121 [2024-07-23 06:26:43.642382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.121 [2024-07-23 06:26:43.643056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.121 [2024-07-23 06:26:43.643083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:31.379 BaseBdev2 00:13:31.379 06:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:31.379 06:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:31.670 BaseBdev3_malloc 00:13:31.670 06:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:31.928 true 00:13:31.928 06:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:32.187 [2024-07-23 06:26:44.490303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:32.187 [2024-07-23 06:26:44.490361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.187 [2024-07-23 06:26:44.490387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e3aa35180 00:13:32.187 [2024-07-23 06:26:44.490396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.187 [2024-07-23 06:26:44.491054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.187 [2024-07-23 06:26:44.491079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:32.187 BaseBdev3 00:13:32.187 06:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:32.445 [2024-07-23 06:26:44.774330] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:32.445 [2024-07-23 06:26:44.774937] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.445 [2024-07-23 06:26:44.774963] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:32.445 [2024-07-23 06:26:44.775023] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e3aa35400 00:13:32.445 [2024-07-23 06:26:44.775030] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:32.445 [2024-07-23 06:26:44.775069] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e3aaa0e20 00:13:32.445 [2024-07-23 06:26:44.775141] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e3aa35400 00:13:32.445 [2024-07-23 06:26:44.775146] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e3aa35400 00:13:32.445 [2024-07-23 06:26:44.775174] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.445 06:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:32.445 06:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:32.445 06:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:32.445 06:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:32.445 06:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:32.445 06:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:32.445 06:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:32.445 06:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:32.445 06:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:32.445 06:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:32.445 06:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.445 06:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.703 06:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:32.703 "name": "raid_bdev1", 00:13:32.703 "uuid": "883110d9-48bc-11ef-a06c-59ddad71024c", 00:13:32.703 "strip_size_kb": 64, 00:13:32.703 "state": "online", 00:13:32.703 "raid_level": "concat", 00:13:32.703 "superblock": true, 00:13:32.703 "num_base_bdevs": 3, 00:13:32.703 "num_base_bdevs_discovered": 3, 00:13:32.703 "num_base_bdevs_operational": 3, 00:13:32.703 "base_bdevs_list": [ 00:13:32.703 { 00:13:32.703 "name": "BaseBdev1", 00:13:32.703 "uuid": "07cca37a-c34d-ff5a-a463-cfca7fe15ca1", 00:13:32.703 "is_configured": true, 00:13:32.703 "data_offset": 2048, 00:13:32.703 "data_size": 63488 00:13:32.703 }, 00:13:32.703 { 00:13:32.703 "name": "BaseBdev2", 00:13:32.703 "uuid": "bb267564-7057-ea57-a181-6497d359e80a", 00:13:32.703 "is_configured": true, 00:13:32.703 "data_offset": 2048, 00:13:32.703 "data_size": 63488 00:13:32.703 }, 00:13:32.703 { 00:13:32.703 "name": "BaseBdev3", 00:13:32.703 "uuid": "b1bcf1ed-23d8-fb59-b6fc-7fe07d5cf47b", 00:13:32.703 "is_configured": true, 00:13:32.703 "data_offset": 2048, 00:13:32.703 "data_size": 63488 00:13:32.703 } 00:13:32.703 ] 00:13:32.703 }' 00:13:32.703 06:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:32.703 06:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.961 06:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:32.961 06:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/spdk-raid.sock perform_tests 00:13:33.219 [2024-07-23 06:26:45.506526] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e3aaa0ec0 00:13:34.153 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.154 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.412 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:34.412 "name": "raid_bdev1", 00:13:34.412 "uuid": "883110d9-48bc-11ef-a06c-59ddad71024c", 00:13:34.412 "strip_size_kb": 64, 00:13:34.412 "state": "online", 00:13:34.412 "raid_level": "concat", 00:13:34.412 "superblock": true, 00:13:34.412 "num_base_bdevs": 3, 00:13:34.412 "num_base_bdevs_discovered": 3, 00:13:34.412 "num_base_bdevs_operational": 3, 00:13:34.412 "base_bdevs_list": [ 00:13:34.412 { 00:13:34.412 "name": "BaseBdev1", 00:13:34.412 "uuid": "07cca37a-c34d-ff5a-a463-cfca7fe15ca1", 00:13:34.412 "is_configured": true, 00:13:34.412 "data_offset": 2048, 00:13:34.412 "data_size": 63488 00:13:34.412 }, 00:13:34.412 { 00:13:34.412 "name": "BaseBdev2", 00:13:34.412 "uuid": "bb267564-7057-ea57-a181-6497d359e80a", 00:13:34.412 "is_configured": true, 00:13:34.412 "data_offset": 2048, 00:13:34.412 "data_size": 63488 00:13:34.412 }, 00:13:34.412 { 00:13:34.412 "name": "BaseBdev3", 00:13:34.412 "uuid": "b1bcf1ed-23d8-fb59-b6fc-7fe07d5cf47b", 00:13:34.412 "is_configured": true, 00:13:34.412 "data_offset": 2048, 00:13:34.412 "data_size": 63488 00:13:34.412 } 00:13:34.412 ] 00:13:34.412 }' 00:13:34.412 06:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:34.412 06:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.977 
06:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:34.977 [2024-07-23 06:26:47.464076] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.978 [2024-07-23 06:26:47.464107] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.978 [2024-07-23 06:26:47.464441] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.978 [2024-07-23 06:26:47.464452] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.978 [2024-07-23 06:26:47.464459] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:34.978 [2024-07-23 06:26:47.464463] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e3aa35400 name raid_bdev1, state offline 00:13:34.978 0 00:13:34.978 06:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 56015 00:13:34.978 06:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 56015 ']' 00:13:34.978 06:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 56015 00:13:34.978 06:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:13:34.978 06:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:34.978 06:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:13:34.978 06:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 56015 00:13:34.978 06:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:13:34.978 06:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:13:34.978 killing process with pid 56015 00:13:34.978 06:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56015' 00:13:34.978 06:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 56015 00:13:34.978 [2024-07-23 06:26:47.490632] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:34.978 06:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 56015 00:13:35.236 [2024-07-23 06:26:47.507872] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:35.236 06:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.jtPy6K8aMN 00:13:35.236 06:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:35.236 06:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:35.236 06:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.51 00:13:35.236 06:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:13:35.236 06:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:35.236 06:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:35.236 06:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.51 != \0\.\0\0 ]] 00:13:35.236 00:13:35.236 real 0m6.717s 00:13:35.236 user 0m10.652s 00:13:35.236 sys 0m1.066s 00:13:35.236 06:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:13:35.236 ************************************ 00:13:35.236 06:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.236 END TEST raid_write_error_test 00:13:35.236 ************************************ 00:13:35.236 06:26:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:35.236 06:26:47 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:35.236 06:26:47 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:13:35.236 06:26:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:35.236 06:26:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.236 06:26:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:35.236 ************************************ 00:13:35.236 START TEST raid_state_function_test 00:13:35.236 ************************************ 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=56144 00:13:35.236 Process raid pid: 56144 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56144' 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 56144 /var/tmp/spdk-raid.sock 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:35.236 06:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 56144 ']' 00:13:35.237 06:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:35.237 06:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:35.237 06:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:35.237 06:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.237 06:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.237 [2024-07-23 06:26:47.743415] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:35.237 [2024-07-23 06:26:47.743653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:35.803 EAL: TSC is not safe to use in SMP mode 00:13:35.803 EAL: TSC is not invariant 00:13:35.803 [2024-07-23 06:26:48.263005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.061 [2024-07-23 06:26:48.361688] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:13:36.061 [2024-07-23 06:26:48.364215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.061 [2024-07-23 06:26:48.365214] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.061 [2024-07-23 06:26:48.365233] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.319 06:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.319 06:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:13:36.319 06:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:36.578 [2024-07-23 06:26:49.026921] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:36.578 [2024-07-23 06:26:49.026981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:36.578 [2024-07-23 06:26:49.026986] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:36.578 [2024-07-23 06:26:49.026996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:36.578 [2024-07-23 06:26:49.026999] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:36.578 [2024-07-23 06:26:49.027015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:36.578 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:36.578 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:36.578 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:36.578 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:36.578 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:36.578 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:36.578 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:36.578 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:36.578 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:36.578 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:36.578 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.578 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.836 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:36.836 "name": "Existed_Raid", 00:13:36.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.836 "strip_size_kb": 0, 00:13:36.836 "state": "configuring", 00:13:36.836 "raid_level": "raid1", 00:13:36.836 "superblock": false, 00:13:36.836 "num_base_bdevs": 3, 00:13:36.836 "num_base_bdevs_discovered": 0, 00:13:36.836 "num_base_bdevs_operational": 3, 00:13:36.836 "base_bdevs_list": [ 00:13:36.836 
{ 00:13:36.836 "name": "BaseBdev1", 00:13:36.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.836 "is_configured": false, 00:13:36.836 "data_offset": 0, 00:13:36.836 "data_size": 0 00:13:36.836 }, 00:13:36.836 { 00:13:36.836 "name": "BaseBdev2", 00:13:36.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.836 "is_configured": false, 00:13:36.836 "data_offset": 0, 00:13:36.836 "data_size": 0 00:13:36.836 }, 00:13:36.836 { 00:13:36.836 "name": "BaseBdev3", 00:13:36.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.836 "is_configured": false, 00:13:36.836 "data_offset": 0, 00:13:36.836 "data_size": 0 00:13:36.836 } 00:13:36.836 ] 00:13:36.836 }' 00:13:36.836 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:36.836 06:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.401 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:37.659 [2024-07-23 06:26:49.942935] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:37.659 [2024-07-23 06:26:49.942970] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x84fa7c34500 name Existed_Raid, state configuring 00:13:37.659 06:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:37.937 [2024-07-23 06:26:50.234947] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:37.937 [2024-07-23 06:26:50.235001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:37.937 [2024-07-23 06:26:50.235006] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:37.937 [2024-07-23 06:26:50.235015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:37.937 [2024-07-23 06:26:50.235019] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:37.937 [2024-07-23 06:26:50.235026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:37.937 06:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:38.195 [2024-07-23 06:26:50.479937] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:38.195 BaseBdev1 00:13:38.195 06:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:38.195 06:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:38.195 06:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:38.195 06:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:38.195 06:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:38.195 06:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:38.195 06:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:13:38.454 06:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:38.712 [ 00:13:38.712 { 00:13:38.712 "name": "BaseBdev1", 00:13:38.712 "aliases": [ 00:13:38.712 "8b978659-48bc-11ef-a06c-59ddad71024c" 00:13:38.712 ], 00:13:38.712 "product_name": "Malloc disk", 00:13:38.712 "block_size": 512, 00:13:38.712 "num_blocks": 65536, 00:13:38.712 "uuid": "8b978659-48bc-11ef-a06c-59ddad71024c", 00:13:38.712 "assigned_rate_limits": { 00:13:38.712 "rw_ios_per_sec": 0, 00:13:38.712 "rw_mbytes_per_sec": 0, 00:13:38.712 "r_mbytes_per_sec": 0, 00:13:38.712 "w_mbytes_per_sec": 0 00:13:38.712 }, 00:13:38.712 "claimed": true, 00:13:38.712 "claim_type": "exclusive_write", 00:13:38.712 "zoned": false, 00:13:38.712 "supported_io_types": { 00:13:38.712 "read": true, 00:13:38.712 "write": true, 00:13:38.712 "unmap": true, 00:13:38.712 "flush": true, 00:13:38.712 "reset": true, 00:13:38.712 "nvme_admin": false, 00:13:38.712 "nvme_io": false, 00:13:38.712 "nvme_io_md": false, 00:13:38.712 "write_zeroes": true, 00:13:38.712 "zcopy": true, 00:13:38.712 "get_zone_info": false, 00:13:38.712 "zone_management": false, 00:13:38.712 "zone_append": false, 00:13:38.712 "compare": false, 00:13:38.712 "compare_and_write": false, 00:13:38.712 "abort": true, 00:13:38.712 "seek_hole": false, 00:13:38.712 "seek_data": false, 00:13:38.712 "copy": true, 00:13:38.712 "nvme_iov_md": false 00:13:38.712 }, 00:13:38.712 "memory_domains": [ 00:13:38.712 { 00:13:38.712 "dma_device_id": "system", 00:13:38.712 "dma_device_type": 1 00:13:38.712 }, 00:13:38.712 { 00:13:38.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.712 "dma_device_type": 2 00:13:38.712 } 00:13:38.712 ], 00:13:38.712 "driver_specific": {} 00:13:38.712 } 00:13:38.712 ] 00:13:38.712 06:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:38.712 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:38.712 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:38.712 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:38.712 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:38.712 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:38.712 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:38.712 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:38.712 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:38.712 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:38.712 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:38.712 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.712 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.970 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:13:38.970 "name": "Existed_Raid", 00:13:38.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.970 "strip_size_kb": 0, 00:13:38.970 "state": "configuring", 00:13:38.970 "raid_level": "raid1", 00:13:38.970 "superblock": false, 00:13:38.970 "num_base_bdevs": 3, 00:13:38.970 "num_base_bdevs_discovered": 1, 00:13:38.970 "num_base_bdevs_operational": 3, 00:13:38.970 "base_bdevs_list": [ 00:13:38.970 { 00:13:38.970 "name": "BaseBdev1", 00:13:38.970 "uuid": "8b978659-48bc-11ef-a06c-59ddad71024c", 00:13:38.970 "is_configured": true, 00:13:38.970 "data_offset": 0, 00:13:38.970 "data_size": 65536 00:13:38.970 }, 00:13:38.970 { 00:13:38.970 "name": "BaseBdev2", 00:13:38.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.970 "is_configured": false, 00:13:38.970 "data_offset": 0, 00:13:38.970 "data_size": 0 00:13:38.970 }, 00:13:38.970 { 00:13:38.970 "name": "BaseBdev3", 00:13:38.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.970 "is_configured": false, 00:13:38.970 "data_offset": 0, 00:13:38.970 "data_size": 0 00:13:38.970 } 00:13:38.970 ] 00:13:38.970 }' 00:13:38.970 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:38.970 06:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.228 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:39.486 [2024-07-23 06:26:51.906976] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.486 [2024-07-23 06:26:51.907012] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x84fa7c34500 name Existed_Raid, state configuring 00:13:39.486 06:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:39.745 [2024-07-23 06:26:52.138999] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.745 [2024-07-23 06:26:52.139794] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.745 [2024-07-23 06:26:52.139832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.745 [2024-07-23 06:26:52.139837] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:39.745 [2024-07-23 06:26:52.139846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:39.745 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:39.745 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:39.745 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:39.745 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:39.745 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:39.745 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:39.745 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:39.745 06:26:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:39.745 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:39.745 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:39.745 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:39.745 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:39.745 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.745 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.004 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:40.004 "name": "Existed_Raid", 00:13:40.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.004 "strip_size_kb": 0, 00:13:40.004 "state": "configuring", 00:13:40.004 "raid_level": "raid1", 00:13:40.004 "superblock": false, 00:13:40.004 "num_base_bdevs": 3, 00:13:40.004 "num_base_bdevs_discovered": 1, 00:13:40.004 "num_base_bdevs_operational": 3, 00:13:40.004 "base_bdevs_list": [ 00:13:40.004 { 00:13:40.004 "name": "BaseBdev1", 00:13:40.004 "uuid": "8b978659-48bc-11ef-a06c-59ddad71024c", 00:13:40.004 "is_configured": true, 00:13:40.004 "data_offset": 0, 00:13:40.004 "data_size": 65536 00:13:40.004 }, 00:13:40.004 { 00:13:40.004 "name": "BaseBdev2", 00:13:40.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.004 "is_configured": false, 00:13:40.004 "data_offset": 0, 00:13:40.004 "data_size": 0 00:13:40.004 }, 00:13:40.004 { 00:13:40.004 "name": "BaseBdev3", 00:13:40.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.004 "is_configured": false, 00:13:40.004 "data_offset": 0, 00:13:40.004 "data_size": 0 00:13:40.004 } 00:13:40.004 ] 00:13:40.004 }' 00:13:40.004 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:40.004 06:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.263 06:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:40.521 [2024-07-23 06:26:53.003151] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.521 BaseBdev2 00:13:40.521 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:40.521 06:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:40.521 06:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:40.521 06:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:40.521 06:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:40.521 06:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:40.521 06:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:40.780 06:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:41.038 [ 00:13:41.038 { 00:13:41.038 "name": "BaseBdev2", 00:13:41.038 "aliases": [ 00:13:41.038 "8d18aa66-48bc-11ef-a06c-59ddad71024c" 00:13:41.038 ], 00:13:41.038 "product_name": "Malloc disk", 00:13:41.038 "block_size": 512, 00:13:41.038 "num_blocks": 65536, 00:13:41.038 "uuid": "8d18aa66-48bc-11ef-a06c-59ddad71024c", 00:13:41.038 "assigned_rate_limits": { 00:13:41.038 "rw_ios_per_sec": 0, 00:13:41.038 "rw_mbytes_per_sec": 0, 00:13:41.038 "r_mbytes_per_sec": 0, 00:13:41.038 "w_mbytes_per_sec": 0 00:13:41.038 }, 00:13:41.038 "claimed": true, 00:13:41.038 "claim_type": "exclusive_write", 00:13:41.038 "zoned": false, 00:13:41.038 "supported_io_types": { 00:13:41.038 "read": true, 00:13:41.038 "write": true, 00:13:41.038 "unmap": true, 00:13:41.038 "flush": true, 00:13:41.038 "reset": true, 00:13:41.038 "nvme_admin": false, 00:13:41.038 "nvme_io": false, 00:13:41.038 "nvme_io_md": false, 00:13:41.038 "write_zeroes": true, 00:13:41.038 "zcopy": true, 00:13:41.038 "get_zone_info": false, 00:13:41.038 "zone_management": false, 00:13:41.038 "zone_append": false, 00:13:41.038 "compare": false, 00:13:41.038 "compare_and_write": false, 00:13:41.038 "abort": true, 00:13:41.038 "seek_hole": false, 00:13:41.038 "seek_data": false, 00:13:41.038 "copy": true, 00:13:41.038 "nvme_iov_md": false 00:13:41.038 }, 00:13:41.038 "memory_domains": [ 00:13:41.038 { 00:13:41.038 "dma_device_id": "system", 00:13:41.038 "dma_device_type": 1 00:13:41.038 }, 00:13:41.038 { 00:13:41.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.038 "dma_device_type": 2 00:13:41.038 } 00:13:41.038 ], 00:13:41.038 "driver_specific": {} 00:13:41.038 } 00:13:41.038 ] 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.038 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.606 06:26:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:41.606 "name": "Existed_Raid", 00:13:41.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.606 "strip_size_kb": 0, 00:13:41.606 "state": "configuring", 00:13:41.606 "raid_level": "raid1", 00:13:41.606 "superblock": false, 00:13:41.606 "num_base_bdevs": 3, 00:13:41.606 "num_base_bdevs_discovered": 2, 00:13:41.606 "num_base_bdevs_operational": 3, 00:13:41.606 "base_bdevs_list": [ 00:13:41.606 { 00:13:41.606 "name": "BaseBdev1", 00:13:41.606 "uuid": "8b978659-48bc-11ef-a06c-59ddad71024c", 00:13:41.606 "is_configured": true, 00:13:41.606 "data_offset": 0, 00:13:41.606 "data_size": 65536 00:13:41.606 }, 00:13:41.606 { 00:13:41.606 "name": "BaseBdev2", 00:13:41.606 "uuid": "8d18aa66-48bc-11ef-a06c-59ddad71024c", 00:13:41.606 "is_configured": true, 00:13:41.606 "data_offset": 0, 00:13:41.606 "data_size": 65536 00:13:41.606 }, 00:13:41.606 { 00:13:41.606 "name": "BaseBdev3", 00:13:41.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.606 "is_configured": false, 00:13:41.606 "data_offset": 0, 00:13:41.606 "data_size": 0 00:13:41.606 } 00:13:41.606 ] 00:13:41.606 }' 00:13:41.606 06:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:41.606 06:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.865 06:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:42.123 [2024-07-23 06:26:54.435403] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.123 [2024-07-23 06:26:54.435433] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x84fa7c34a00 00:13:42.123 [2024-07-23 06:26:54.435438] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:42.123 [2024-07-23 06:26:54.435460] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x84fa7c97e20 00:13:42.123 [2024-07-23 06:26:54.435555] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x84fa7c34a00 00:13:42.123 [2024-07-23 06:26:54.435559] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x84fa7c34a00 00:13:42.123 [2024-07-23 06:26:54.435601] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.123 BaseBdev3 00:13:42.123 06:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:42.123 06:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:42.123 06:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:42.123 06:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:42.123 06:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:42.123 06:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:42.123 06:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:42.382 06:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 
-t 2000 00:13:42.641 [ 00:13:42.641 { 00:13:42.641 "name": "BaseBdev3", 00:13:42.641 "aliases": [ 00:13:42.641 "8df33641-48bc-11ef-a06c-59ddad71024c" 00:13:42.641 ], 00:13:42.641 "product_name": "Malloc disk", 00:13:42.641 "block_size": 512, 00:13:42.641 "num_blocks": 65536, 00:13:42.641 "uuid": "8df33641-48bc-11ef-a06c-59ddad71024c", 00:13:42.641 "assigned_rate_limits": { 00:13:42.641 "rw_ios_per_sec": 0, 00:13:42.641 "rw_mbytes_per_sec": 0, 00:13:42.641 "r_mbytes_per_sec": 0, 00:13:42.641 "w_mbytes_per_sec": 0 00:13:42.641 }, 00:13:42.641 "claimed": true, 00:13:42.641 "claim_type": "exclusive_write", 00:13:42.641 "zoned": false, 00:13:42.641 "supported_io_types": { 00:13:42.641 "read": true, 00:13:42.641 "write": true, 00:13:42.641 "unmap": true, 00:13:42.641 "flush": true, 00:13:42.641 "reset": true, 00:13:42.641 "nvme_admin": false, 00:13:42.641 "nvme_io": false, 00:13:42.641 "nvme_io_md": false, 00:13:42.641 "write_zeroes": true, 00:13:42.641 "zcopy": true, 00:13:42.641 "get_zone_info": false, 00:13:42.641 "zone_management": false, 00:13:42.641 "zone_append": false, 00:13:42.641 "compare": false, 00:13:42.641 "compare_and_write": false, 00:13:42.641 "abort": true, 00:13:42.641 "seek_hole": false, 00:13:42.641 "seek_data": false, 00:13:42.641 "copy": true, 00:13:42.641 "nvme_iov_md": false 00:13:42.641 }, 00:13:42.641 "memory_domains": [ 00:13:42.641 { 00:13:42.641 "dma_device_id": "system", 00:13:42.641 "dma_device_type": 1 00:13:42.641 }, 00:13:42.641 { 00:13:42.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.641 "dma_device_type": 2 00:13:42.641 } 00:13:42.641 ], 00:13:42.641 "driver_specific": {} 00:13:42.641 } 00:13:42.641 ] 00:13:42.641 06:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:42.641 06:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:42.641 06:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:42.641 06:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:42.641 06:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:42.641 06:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:42.641 06:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:42.641 06:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:42.641 06:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:42.641 06:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:42.641 06:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:42.641 06:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:42.641 06:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:42.641 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.641 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.899 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:13:42.899 "name": "Existed_Raid", 00:13:42.899 "uuid": "8df33c7f-48bc-11ef-a06c-59ddad71024c", 00:13:42.899 "strip_size_kb": 0, 00:13:42.899 "state": "online", 00:13:42.899 "raid_level": "raid1", 00:13:42.899 "superblock": false, 00:13:42.899 "num_base_bdevs": 3, 00:13:42.899 "num_base_bdevs_discovered": 3, 00:13:42.899 "num_base_bdevs_operational": 3, 00:13:42.899 "base_bdevs_list": [ 00:13:42.899 { 00:13:42.899 "name": "BaseBdev1", 00:13:42.899 "uuid": "8b978659-48bc-11ef-a06c-59ddad71024c", 00:13:42.899 "is_configured": true, 00:13:42.899 "data_offset": 0, 00:13:42.900 "data_size": 65536 00:13:42.900 }, 00:13:42.900 { 00:13:42.900 "name": "BaseBdev2", 00:13:42.900 "uuid": "8d18aa66-48bc-11ef-a06c-59ddad71024c", 00:13:42.900 "is_configured": true, 00:13:42.900 "data_offset": 0, 00:13:42.900 "data_size": 65536 00:13:42.900 }, 00:13:42.900 { 00:13:42.900 "name": "BaseBdev3", 00:13:42.900 "uuid": "8df33641-48bc-11ef-a06c-59ddad71024c", 00:13:42.900 "is_configured": true, 00:13:42.900 "data_offset": 0, 00:13:42.900 "data_size": 65536 00:13:42.900 } 00:13:42.900 ] 00:13:42.900 }' 00:13:42.900 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:42.900 06:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.158 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:43.158 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:43.158 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:43.158 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:43.158 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:43.158 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:43.158 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:43.158 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:43.417 [2024-07-23 06:26:55.799479] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.417 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:43.417 "name": "Existed_Raid", 00:13:43.417 "aliases": [ 00:13:43.417 "8df33c7f-48bc-11ef-a06c-59ddad71024c" 00:13:43.417 ], 00:13:43.417 "product_name": "Raid Volume", 00:13:43.417 "block_size": 512, 00:13:43.417 "num_blocks": 65536, 00:13:43.417 "uuid": "8df33c7f-48bc-11ef-a06c-59ddad71024c", 00:13:43.417 "assigned_rate_limits": { 00:13:43.417 "rw_ios_per_sec": 0, 00:13:43.417 "rw_mbytes_per_sec": 0, 00:13:43.417 "r_mbytes_per_sec": 0, 00:13:43.417 "w_mbytes_per_sec": 0 00:13:43.417 }, 00:13:43.417 "claimed": false, 00:13:43.417 "zoned": false, 00:13:43.417 "supported_io_types": { 00:13:43.417 "read": true, 00:13:43.417 "write": true, 00:13:43.417 "unmap": false, 00:13:43.417 "flush": false, 00:13:43.417 "reset": true, 00:13:43.417 "nvme_admin": false, 00:13:43.417 "nvme_io": false, 00:13:43.417 "nvme_io_md": false, 00:13:43.417 "write_zeroes": true, 00:13:43.417 "zcopy": false, 00:13:43.417 "get_zone_info": false, 00:13:43.417 "zone_management": false, 00:13:43.417 "zone_append": false, 00:13:43.417 "compare": false, 00:13:43.417 
"compare_and_write": false, 00:13:43.417 "abort": false, 00:13:43.417 "seek_hole": false, 00:13:43.417 "seek_data": false, 00:13:43.417 "copy": false, 00:13:43.417 "nvme_iov_md": false 00:13:43.417 }, 00:13:43.418 "memory_domains": [ 00:13:43.418 { 00:13:43.418 "dma_device_id": "system", 00:13:43.418 "dma_device_type": 1 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.418 "dma_device_type": 2 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "dma_device_id": "system", 00:13:43.418 "dma_device_type": 1 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.418 "dma_device_type": 2 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "dma_device_id": "system", 00:13:43.418 "dma_device_type": 1 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.418 "dma_device_type": 2 00:13:43.418 } 00:13:43.418 ], 00:13:43.418 "driver_specific": { 00:13:43.418 "raid": { 00:13:43.418 "uuid": "8df33c7f-48bc-11ef-a06c-59ddad71024c", 00:13:43.418 "strip_size_kb": 0, 00:13:43.418 "state": "online", 00:13:43.418 "raid_level": "raid1", 00:13:43.418 "superblock": false, 00:13:43.418 "num_base_bdevs": 3, 00:13:43.418 "num_base_bdevs_discovered": 3, 00:13:43.418 "num_base_bdevs_operational": 3, 00:13:43.418 "base_bdevs_list": [ 00:13:43.418 { 00:13:43.418 "name": "BaseBdev1", 00:13:43.418 "uuid": "8b978659-48bc-11ef-a06c-59ddad71024c", 00:13:43.418 "is_configured": true, 00:13:43.418 "data_offset": 0, 00:13:43.418 "data_size": 65536 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "name": "BaseBdev2", 00:13:43.418 "uuid": "8d18aa66-48bc-11ef-a06c-59ddad71024c", 00:13:43.418 "is_configured": true, 00:13:43.418 "data_offset": 0, 00:13:43.418 "data_size": 65536 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "name": "BaseBdev3", 00:13:43.418 "uuid": "8df33641-48bc-11ef-a06c-59ddad71024c", 00:13:43.418 "is_configured": true, 00:13:43.418 "data_offset": 0, 00:13:43.418 "data_size": 65536 00:13:43.418 } 00:13:43.418 ] 00:13:43.418 } 00:13:43.418 } 00:13:43.418 }' 00:13:43.418 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:43.418 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:43.418 BaseBdev2 00:13:43.418 BaseBdev3' 00:13:43.418 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:43.418 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:43.418 06:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:43.677 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:43.677 "name": "BaseBdev1", 00:13:43.677 "aliases": [ 00:13:43.677 "8b978659-48bc-11ef-a06c-59ddad71024c" 00:13:43.677 ], 00:13:43.677 "product_name": "Malloc disk", 00:13:43.677 "block_size": 512, 00:13:43.677 "num_blocks": 65536, 00:13:43.677 "uuid": "8b978659-48bc-11ef-a06c-59ddad71024c", 00:13:43.677 "assigned_rate_limits": { 00:13:43.677 "rw_ios_per_sec": 0, 00:13:43.677 "rw_mbytes_per_sec": 0, 00:13:43.677 "r_mbytes_per_sec": 0, 00:13:43.677 "w_mbytes_per_sec": 0 00:13:43.677 }, 00:13:43.677 "claimed": true, 00:13:43.677 "claim_type": "exclusive_write", 00:13:43.677 "zoned": false, 00:13:43.677 "supported_io_types": { 
00:13:43.677 "read": true, 00:13:43.677 "write": true, 00:13:43.677 "unmap": true, 00:13:43.677 "flush": true, 00:13:43.677 "reset": true, 00:13:43.677 "nvme_admin": false, 00:13:43.677 "nvme_io": false, 00:13:43.677 "nvme_io_md": false, 00:13:43.677 "write_zeroes": true, 00:13:43.677 "zcopy": true, 00:13:43.677 "get_zone_info": false, 00:13:43.677 "zone_management": false, 00:13:43.677 "zone_append": false, 00:13:43.678 "compare": false, 00:13:43.678 "compare_and_write": false, 00:13:43.678 "abort": true, 00:13:43.678 "seek_hole": false, 00:13:43.678 "seek_data": false, 00:13:43.678 "copy": true, 00:13:43.678 "nvme_iov_md": false 00:13:43.678 }, 00:13:43.678 "memory_domains": [ 00:13:43.678 { 00:13:43.678 "dma_device_id": "system", 00:13:43.678 "dma_device_type": 1 00:13:43.678 }, 00:13:43.678 { 00:13:43.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.678 "dma_device_type": 2 00:13:43.678 } 00:13:43.678 ], 00:13:43.678 "driver_specific": {} 00:13:43.678 }' 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:43.678 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:43.937 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:43.937 "name": "BaseBdev2", 00:13:43.937 "aliases": [ 00:13:43.937 "8d18aa66-48bc-11ef-a06c-59ddad71024c" 00:13:43.937 ], 00:13:43.937 "product_name": "Malloc disk", 00:13:43.937 "block_size": 512, 00:13:43.937 "num_blocks": 65536, 00:13:43.937 "uuid": "8d18aa66-48bc-11ef-a06c-59ddad71024c", 00:13:43.937 "assigned_rate_limits": { 00:13:43.937 "rw_ios_per_sec": 0, 00:13:43.937 "rw_mbytes_per_sec": 0, 00:13:43.937 "r_mbytes_per_sec": 0, 00:13:43.937 "w_mbytes_per_sec": 0 00:13:43.937 }, 00:13:43.937 "claimed": true, 00:13:43.937 "claim_type": "exclusive_write", 00:13:43.937 "zoned": false, 00:13:43.937 "supported_io_types": { 00:13:43.937 "read": true, 00:13:43.937 "write": true, 00:13:43.937 "unmap": true, 00:13:43.937 "flush": true, 00:13:43.937 "reset": true, 00:13:43.938 "nvme_admin": false, 00:13:43.938 "nvme_io": 
false, 00:13:43.938 "nvme_io_md": false, 00:13:43.938 "write_zeroes": true, 00:13:43.938 "zcopy": true, 00:13:43.938 "get_zone_info": false, 00:13:43.938 "zone_management": false, 00:13:43.938 "zone_append": false, 00:13:43.938 "compare": false, 00:13:43.938 "compare_and_write": false, 00:13:43.938 "abort": true, 00:13:43.938 "seek_hole": false, 00:13:43.938 "seek_data": false, 00:13:43.938 "copy": true, 00:13:43.938 "nvme_iov_md": false 00:13:43.938 }, 00:13:43.938 "memory_domains": [ 00:13:43.938 { 00:13:43.938 "dma_device_id": "system", 00:13:43.938 "dma_device_type": 1 00:13:43.938 }, 00:13:43.938 { 00:13:43.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.938 "dma_device_type": 2 00:13:43.938 } 00:13:43.938 ], 00:13:43.938 "driver_specific": {} 00:13:43.938 }' 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:43.938 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:44.505 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:44.505 "name": "BaseBdev3", 00:13:44.505 "aliases": [ 00:13:44.505 "8df33641-48bc-11ef-a06c-59ddad71024c" 00:13:44.505 ], 00:13:44.505 "product_name": "Malloc disk", 00:13:44.505 "block_size": 512, 00:13:44.505 "num_blocks": 65536, 00:13:44.505 "uuid": "8df33641-48bc-11ef-a06c-59ddad71024c", 00:13:44.505 "assigned_rate_limits": { 00:13:44.505 "rw_ios_per_sec": 0, 00:13:44.505 "rw_mbytes_per_sec": 0, 00:13:44.505 "r_mbytes_per_sec": 0, 00:13:44.505 "w_mbytes_per_sec": 0 00:13:44.505 }, 00:13:44.505 "claimed": true, 00:13:44.505 "claim_type": "exclusive_write", 00:13:44.505 "zoned": false, 00:13:44.505 "supported_io_types": { 00:13:44.505 "read": true, 00:13:44.505 "write": true, 00:13:44.505 "unmap": true, 00:13:44.505 "flush": true, 00:13:44.505 "reset": true, 00:13:44.505 "nvme_admin": false, 00:13:44.505 "nvme_io": false, 00:13:44.505 "nvme_io_md": false, 00:13:44.505 "write_zeroes": true, 00:13:44.505 "zcopy": true, 00:13:44.505 "get_zone_info": false, 00:13:44.505 "zone_management": false, 00:13:44.505 
"zone_append": false, 00:13:44.505 "compare": false, 00:13:44.506 "compare_and_write": false, 00:13:44.506 "abort": true, 00:13:44.506 "seek_hole": false, 00:13:44.506 "seek_data": false, 00:13:44.506 "copy": true, 00:13:44.506 "nvme_iov_md": false 00:13:44.506 }, 00:13:44.506 "memory_domains": [ 00:13:44.506 { 00:13:44.506 "dma_device_id": "system", 00:13:44.506 "dma_device_type": 1 00:13:44.506 }, 00:13:44.506 { 00:13:44.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.506 "dma_device_type": 2 00:13:44.506 } 00:13:44.506 ], 00:13:44.506 "driver_specific": {} 00:13:44.506 }' 00:13:44.506 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.506 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.506 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:44.506 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.506 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.506 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:44.506 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.506 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.506 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:44.506 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.506 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.506 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:44.506 06:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:44.765 [2024-07-23 06:26:57.087490] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:44.765 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.023 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:45.023 "name": "Existed_Raid", 00:13:45.023 "uuid": "8df33c7f-48bc-11ef-a06c-59ddad71024c", 00:13:45.023 "strip_size_kb": 0, 00:13:45.023 "state": "online", 00:13:45.023 "raid_level": "raid1", 00:13:45.023 "superblock": false, 00:13:45.023 "num_base_bdevs": 3, 00:13:45.023 "num_base_bdevs_discovered": 2, 00:13:45.023 "num_base_bdevs_operational": 2, 00:13:45.023 "base_bdevs_list": [ 00:13:45.023 { 00:13:45.023 "name": null, 00:13:45.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.023 "is_configured": false, 00:13:45.023 "data_offset": 0, 00:13:45.023 "data_size": 65536 00:13:45.023 }, 00:13:45.023 { 00:13:45.023 "name": "BaseBdev2", 00:13:45.023 "uuid": "8d18aa66-48bc-11ef-a06c-59ddad71024c", 00:13:45.023 "is_configured": true, 00:13:45.023 "data_offset": 0, 00:13:45.023 "data_size": 65536 00:13:45.023 }, 00:13:45.023 { 00:13:45.023 "name": "BaseBdev3", 00:13:45.023 "uuid": "8df33641-48bc-11ef-a06c-59ddad71024c", 00:13:45.023 "is_configured": true, 00:13:45.023 "data_offset": 0, 00:13:45.023 "data_size": 65536 00:13:45.023 } 00:13:45.023 ] 00:13:45.023 }' 00:13:45.023 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:45.023 06:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.280 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:45.280 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:45.280 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.280 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:45.539 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:45.539 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:45.539 06:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:45.796 [2024-07-23 06:26:58.229339] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:45.796 06:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:45.796 06:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:45.796 06:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.796 06:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:46.055 06:26:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:46.055 06:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:46.055 06:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:46.313 [2024-07-23 06:26:58.755066] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:46.313 [2024-07-23 06:26:58.755110] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.313 [2024-07-23 06:26:58.760943] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.313 [2024-07-23 06:26:58.760974] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.313 [2024-07-23 06:26:58.760979] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x84fa7c34a00 name Existed_Raid, state offline 00:13:46.313 06:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:46.313 06:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:46.313 06:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.313 06:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:46.572 06:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:46.572 06:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:46.572 06:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:46.572 06:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:46.572 06:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:46.572 06:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:46.838 BaseBdev2 00:13:46.838 06:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:46.838 06:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:46.838 06:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:46.838 06:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:46.838 06:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:46.838 06:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:46.838 06:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:47.122 06:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:47.380 [ 00:13:47.380 { 00:13:47.380 "name": "BaseBdev2", 00:13:47.380 "aliases": [ 00:13:47.380 "90dc18a7-48bc-11ef-a06c-59ddad71024c" 00:13:47.380 ], 00:13:47.380 "product_name": "Malloc disk", 00:13:47.380 
"block_size": 512, 00:13:47.380 "num_blocks": 65536, 00:13:47.380 "uuid": "90dc18a7-48bc-11ef-a06c-59ddad71024c", 00:13:47.380 "assigned_rate_limits": { 00:13:47.380 "rw_ios_per_sec": 0, 00:13:47.380 "rw_mbytes_per_sec": 0, 00:13:47.380 "r_mbytes_per_sec": 0, 00:13:47.380 "w_mbytes_per_sec": 0 00:13:47.380 }, 00:13:47.380 "claimed": false, 00:13:47.380 "zoned": false, 00:13:47.380 "supported_io_types": { 00:13:47.380 "read": true, 00:13:47.380 "write": true, 00:13:47.380 "unmap": true, 00:13:47.380 "flush": true, 00:13:47.380 "reset": true, 00:13:47.380 "nvme_admin": false, 00:13:47.380 "nvme_io": false, 00:13:47.380 "nvme_io_md": false, 00:13:47.380 "write_zeroes": true, 00:13:47.380 "zcopy": true, 00:13:47.380 "get_zone_info": false, 00:13:47.380 "zone_management": false, 00:13:47.380 "zone_append": false, 00:13:47.380 "compare": false, 00:13:47.380 "compare_and_write": false, 00:13:47.380 "abort": true, 00:13:47.380 "seek_hole": false, 00:13:47.380 "seek_data": false, 00:13:47.380 "copy": true, 00:13:47.380 "nvme_iov_md": false 00:13:47.380 }, 00:13:47.380 "memory_domains": [ 00:13:47.380 { 00:13:47.380 "dma_device_id": "system", 00:13:47.380 "dma_device_type": 1 00:13:47.380 }, 00:13:47.380 { 00:13:47.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.380 "dma_device_type": 2 00:13:47.380 } 00:13:47.380 ], 00:13:47.380 "driver_specific": {} 00:13:47.380 } 00:13:47.380 ] 00:13:47.380 06:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:47.380 06:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:47.380 06:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:47.380 06:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:47.638 BaseBdev3 00:13:47.638 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:47.638 06:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:47.638 06:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:47.638 06:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:47.638 06:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:47.638 06:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:47.638 06:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:47.914 06:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:48.171 [ 00:13:48.171 { 00:13:48.171 "name": "BaseBdev3", 00:13:48.171 "aliases": [ 00:13:48.171 "91562b17-48bc-11ef-a06c-59ddad71024c" 00:13:48.171 ], 00:13:48.171 "product_name": "Malloc disk", 00:13:48.171 "block_size": 512, 00:13:48.171 "num_blocks": 65536, 00:13:48.171 "uuid": "91562b17-48bc-11ef-a06c-59ddad71024c", 00:13:48.171 "assigned_rate_limits": { 00:13:48.171 "rw_ios_per_sec": 0, 00:13:48.171 "rw_mbytes_per_sec": 0, 00:13:48.171 "r_mbytes_per_sec": 0, 00:13:48.171 "w_mbytes_per_sec": 0 00:13:48.171 }, 00:13:48.171 "claimed": false, 
00:13:48.171 "zoned": false, 00:13:48.171 "supported_io_types": { 00:13:48.171 "read": true, 00:13:48.171 "write": true, 00:13:48.171 "unmap": true, 00:13:48.171 "flush": true, 00:13:48.171 "reset": true, 00:13:48.171 "nvme_admin": false, 00:13:48.171 "nvme_io": false, 00:13:48.171 "nvme_io_md": false, 00:13:48.171 "write_zeroes": true, 00:13:48.171 "zcopy": true, 00:13:48.171 "get_zone_info": false, 00:13:48.171 "zone_management": false, 00:13:48.171 "zone_append": false, 00:13:48.171 "compare": false, 00:13:48.171 "compare_and_write": false, 00:13:48.171 "abort": true, 00:13:48.171 "seek_hole": false, 00:13:48.171 "seek_data": false, 00:13:48.171 "copy": true, 00:13:48.171 "nvme_iov_md": false 00:13:48.171 }, 00:13:48.171 "memory_domains": [ 00:13:48.171 { 00:13:48.171 "dma_device_id": "system", 00:13:48.171 "dma_device_type": 1 00:13:48.171 }, 00:13:48.171 { 00:13:48.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.171 "dma_device_type": 2 00:13:48.171 } 00:13:48.171 ], 00:13:48.171 "driver_specific": {} 00:13:48.171 } 00:13:48.171 ] 00:13:48.171 06:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:48.171 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:48.171 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:48.171 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:48.735 [2024-07-23 06:27:00.952971] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:48.735 [2024-07-23 06:27:00.953020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:48.735 [2024-07-23 06:27:00.953030] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.735 [2024-07-23 06:27:00.953618] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.735 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:48.736 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:48.736 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:48.736 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:48.736 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:48.736 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:48.736 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.736 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.736 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.736 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.736 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.736 06:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:13:48.994 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.994 "name": "Existed_Raid", 00:13:48.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.994 "strip_size_kb": 0, 00:13:48.994 "state": "configuring", 00:13:48.994 "raid_level": "raid1", 00:13:48.994 "superblock": false, 00:13:48.994 "num_base_bdevs": 3, 00:13:48.994 "num_base_bdevs_discovered": 2, 00:13:48.994 "num_base_bdevs_operational": 3, 00:13:48.994 "base_bdevs_list": [ 00:13:48.994 { 00:13:48.994 "name": "BaseBdev1", 00:13:48.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.994 "is_configured": false, 00:13:48.994 "data_offset": 0, 00:13:48.994 "data_size": 0 00:13:48.994 }, 00:13:48.994 { 00:13:48.994 "name": "BaseBdev2", 00:13:48.994 "uuid": "90dc18a7-48bc-11ef-a06c-59ddad71024c", 00:13:48.994 "is_configured": true, 00:13:48.994 "data_offset": 0, 00:13:48.994 "data_size": 65536 00:13:48.994 }, 00:13:48.994 { 00:13:48.994 "name": "BaseBdev3", 00:13:48.994 "uuid": "91562b17-48bc-11ef-a06c-59ddad71024c", 00:13:48.994 "is_configured": true, 00:13:48.994 "data_offset": 0, 00:13:48.994 "data_size": 65536 00:13:48.994 } 00:13:48.994 ] 00:13:48.994 }' 00:13:48.994 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.994 06:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.252 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:49.510 [2024-07-23 06:27:01.845008] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:49.510 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:49.510 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:49.510 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:49.510 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:49.510 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:49.510 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:49.510 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:49.510 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:49.510 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:49.510 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:49.510 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.510 06:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.768 06:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:49.768 "name": "Existed_Raid", 00:13:49.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.768 "strip_size_kb": 0, 00:13:49.768 "state": "configuring", 00:13:49.768 "raid_level": "raid1", 00:13:49.768 "superblock": 
false, 00:13:49.768 "num_base_bdevs": 3, 00:13:49.768 "num_base_bdevs_discovered": 1, 00:13:49.768 "num_base_bdevs_operational": 3, 00:13:49.768 "base_bdevs_list": [ 00:13:49.768 { 00:13:49.768 "name": "BaseBdev1", 00:13:49.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.768 "is_configured": false, 00:13:49.768 "data_offset": 0, 00:13:49.768 "data_size": 0 00:13:49.768 }, 00:13:49.768 { 00:13:49.768 "name": null, 00:13:49.768 "uuid": "90dc18a7-48bc-11ef-a06c-59ddad71024c", 00:13:49.768 "is_configured": false, 00:13:49.768 "data_offset": 0, 00:13:49.768 "data_size": 65536 00:13:49.768 }, 00:13:49.768 { 00:13:49.768 "name": "BaseBdev3", 00:13:49.768 "uuid": "91562b17-48bc-11ef-a06c-59ddad71024c", 00:13:49.768 "is_configured": true, 00:13:49.768 "data_offset": 0, 00:13:49.768 "data_size": 65536 00:13:49.768 } 00:13:49.768 ] 00:13:49.768 }' 00:13:49.768 06:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:49.768 06:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.027 06:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.027 06:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:50.591 06:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:50.591 06:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:50.591 [2024-07-23 06:27:03.113194] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.850 BaseBdev1 00:13:50.850 06:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:50.850 06:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:50.850 06:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:50.850 06:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:50.850 06:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:50.850 06:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:50.850 06:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:51.108 06:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:51.366 [ 00:13:51.366 { 00:13:51.366 "name": "BaseBdev1", 00:13:51.366 "aliases": [ 00:13:51.366 "931f560d-48bc-11ef-a06c-59ddad71024c" 00:13:51.366 ], 00:13:51.366 "product_name": "Malloc disk", 00:13:51.366 "block_size": 512, 00:13:51.366 "num_blocks": 65536, 00:13:51.366 "uuid": "931f560d-48bc-11ef-a06c-59ddad71024c", 00:13:51.366 "assigned_rate_limits": { 00:13:51.366 "rw_ios_per_sec": 0, 00:13:51.366 "rw_mbytes_per_sec": 0, 00:13:51.366 "r_mbytes_per_sec": 0, 00:13:51.366 "w_mbytes_per_sec": 0 00:13:51.366 }, 00:13:51.366 "claimed": true, 00:13:51.366 "claim_type": "exclusive_write", 00:13:51.366 "zoned": false, 00:13:51.366 
"supported_io_types": { 00:13:51.366 "read": true, 00:13:51.366 "write": true, 00:13:51.366 "unmap": true, 00:13:51.366 "flush": true, 00:13:51.366 "reset": true, 00:13:51.366 "nvme_admin": false, 00:13:51.366 "nvme_io": false, 00:13:51.366 "nvme_io_md": false, 00:13:51.366 "write_zeroes": true, 00:13:51.366 "zcopy": true, 00:13:51.366 "get_zone_info": false, 00:13:51.366 "zone_management": false, 00:13:51.366 "zone_append": false, 00:13:51.366 "compare": false, 00:13:51.366 "compare_and_write": false, 00:13:51.366 "abort": true, 00:13:51.366 "seek_hole": false, 00:13:51.366 "seek_data": false, 00:13:51.366 "copy": true, 00:13:51.366 "nvme_iov_md": false 00:13:51.366 }, 00:13:51.366 "memory_domains": [ 00:13:51.366 { 00:13:51.366 "dma_device_id": "system", 00:13:51.366 "dma_device_type": 1 00:13:51.366 }, 00:13:51.366 { 00:13:51.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.366 "dma_device_type": 2 00:13:51.366 } 00:13:51.366 ], 00:13:51.366 "driver_specific": {} 00:13:51.366 } 00:13:51.366 ] 00:13:51.366 06:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:51.366 06:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:51.366 06:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:51.366 06:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:51.366 06:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:51.366 06:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:51.366 06:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:51.366 06:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:51.366 06:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:51.366 06:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:51.366 06:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:51.366 06:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.366 06:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.661 06:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:51.661 "name": "Existed_Raid", 00:13:51.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.661 "strip_size_kb": 0, 00:13:51.661 "state": "configuring", 00:13:51.661 "raid_level": "raid1", 00:13:51.661 "superblock": false, 00:13:51.661 "num_base_bdevs": 3, 00:13:51.661 "num_base_bdevs_discovered": 2, 00:13:51.661 "num_base_bdevs_operational": 3, 00:13:51.661 "base_bdevs_list": [ 00:13:51.661 { 00:13:51.661 "name": "BaseBdev1", 00:13:51.661 "uuid": "931f560d-48bc-11ef-a06c-59ddad71024c", 00:13:51.661 "is_configured": true, 00:13:51.661 "data_offset": 0, 00:13:51.661 "data_size": 65536 00:13:51.661 }, 00:13:51.661 { 00:13:51.661 "name": null, 00:13:51.661 "uuid": "90dc18a7-48bc-11ef-a06c-59ddad71024c", 00:13:51.661 "is_configured": false, 00:13:51.661 "data_offset": 0, 00:13:51.661 "data_size": 65536 00:13:51.661 }, 00:13:51.661 { 
00:13:51.661 "name": "BaseBdev3", 00:13:51.661 "uuid": "91562b17-48bc-11ef-a06c-59ddad71024c", 00:13:51.661 "is_configured": true, 00:13:51.661 "data_offset": 0, 00:13:51.661 "data_size": 65536 00:13:51.661 } 00:13:51.661 ] 00:13:51.661 }' 00:13:51.661 06:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:51.661 06:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.918 06:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.918 06:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:52.484 06:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:52.484 06:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:52.484 [2024-07-23 06:27:04.989115] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:52.742 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:52.742 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:52.742 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:52.742 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:52.742 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:52.742 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:52.742 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:52.742 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:52.742 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:52.742 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:52.742 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.742 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.998 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:52.998 "name": "Existed_Raid", 00:13:52.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.998 "strip_size_kb": 0, 00:13:52.999 "state": "configuring", 00:13:52.999 "raid_level": "raid1", 00:13:52.999 "superblock": false, 00:13:52.999 "num_base_bdevs": 3, 00:13:52.999 "num_base_bdevs_discovered": 1, 00:13:52.999 "num_base_bdevs_operational": 3, 00:13:52.999 "base_bdevs_list": [ 00:13:52.999 { 00:13:52.999 "name": "BaseBdev1", 00:13:52.999 "uuid": "931f560d-48bc-11ef-a06c-59ddad71024c", 00:13:52.999 "is_configured": true, 00:13:52.999 "data_offset": 0, 00:13:52.999 "data_size": 65536 00:13:52.999 }, 00:13:52.999 { 00:13:52.999 "name": null, 00:13:52.999 "uuid": "90dc18a7-48bc-11ef-a06c-59ddad71024c", 00:13:52.999 "is_configured": false, 00:13:52.999 "data_offset": 0, 00:13:52.999 
"data_size": 65536 00:13:52.999 }, 00:13:52.999 { 00:13:52.999 "name": null, 00:13:52.999 "uuid": "91562b17-48bc-11ef-a06c-59ddad71024c", 00:13:52.999 "is_configured": false, 00:13:52.999 "data_offset": 0, 00:13:52.999 "data_size": 65536 00:13:52.999 } 00:13:52.999 ] 00:13:52.999 }' 00:13:52.999 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:52.999 06:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.257 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:53.257 06:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.514 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:53.514 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:54.098 [2024-07-23 06:27:06.329157] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.098 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:54.098 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:54.098 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:54.098 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:54.098 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:54.098 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:54.098 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:54.098 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:54.098 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:54.098 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:54.098 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.098 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.363 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:54.363 "name": "Existed_Raid", 00:13:54.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.363 "strip_size_kb": 0, 00:13:54.363 "state": "configuring", 00:13:54.363 "raid_level": "raid1", 00:13:54.363 "superblock": false, 00:13:54.364 "num_base_bdevs": 3, 00:13:54.364 "num_base_bdevs_discovered": 2, 00:13:54.364 "num_base_bdevs_operational": 3, 00:13:54.364 "base_bdevs_list": [ 00:13:54.364 { 00:13:54.364 "name": "BaseBdev1", 00:13:54.364 "uuid": "931f560d-48bc-11ef-a06c-59ddad71024c", 00:13:54.364 "is_configured": true, 00:13:54.364 "data_offset": 0, 00:13:54.364 "data_size": 65536 00:13:54.364 }, 00:13:54.364 { 00:13:54.364 "name": null, 00:13:54.364 "uuid": "90dc18a7-48bc-11ef-a06c-59ddad71024c", 
00:13:54.364 "is_configured": false, 00:13:54.364 "data_offset": 0, 00:13:54.364 "data_size": 65536 00:13:54.364 }, 00:13:54.364 { 00:13:54.364 "name": "BaseBdev3", 00:13:54.364 "uuid": "91562b17-48bc-11ef-a06c-59ddad71024c", 00:13:54.364 "is_configured": true, 00:13:54.364 "data_offset": 0, 00:13:54.364 "data_size": 65536 00:13:54.364 } 00:13:54.364 ] 00:13:54.364 }' 00:13:54.364 06:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:54.364 06:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.622 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.622 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:54.881 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:54.881 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:55.139 [2024-07-23 06:27:07.653288] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.397 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:55.397 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:55.397 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:55.397 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:55.397 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:55.397 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:55.397 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:55.397 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:55.397 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:55.397 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:55.397 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.397 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.656 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:55.656 "name": "Existed_Raid", 00:13:55.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.656 "strip_size_kb": 0, 00:13:55.656 "state": "configuring", 00:13:55.656 "raid_level": "raid1", 00:13:55.656 "superblock": false, 00:13:55.656 "num_base_bdevs": 3, 00:13:55.656 "num_base_bdevs_discovered": 1, 00:13:55.656 "num_base_bdevs_operational": 3, 00:13:55.656 "base_bdevs_list": [ 00:13:55.656 { 00:13:55.656 "name": null, 00:13:55.656 "uuid": "931f560d-48bc-11ef-a06c-59ddad71024c", 00:13:55.656 "is_configured": false, 00:13:55.656 "data_offset": 0, 00:13:55.656 "data_size": 65536 00:13:55.656 }, 00:13:55.656 { 00:13:55.656 "name": null, 00:13:55.656 "uuid": 
"90dc18a7-48bc-11ef-a06c-59ddad71024c", 00:13:55.656 "is_configured": false, 00:13:55.656 "data_offset": 0, 00:13:55.656 "data_size": 65536 00:13:55.656 }, 00:13:55.656 { 00:13:55.656 "name": "BaseBdev3", 00:13:55.656 "uuid": "91562b17-48bc-11ef-a06c-59ddad71024c", 00:13:55.656 "is_configured": true, 00:13:55.656 "data_offset": 0, 00:13:55.656 "data_size": 65536 00:13:55.656 } 00:13:55.656 ] 00:13:55.656 }' 00:13:55.656 06:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:55.656 06:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.914 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:55.914 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.172 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:56.172 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:56.430 [2024-07-23 06:27:08.867968] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.430 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:56.430 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:56.430 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:56.430 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:56.430 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:56.430 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:56.430 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.430 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.430 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.430 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.430 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.430 06:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.689 06:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.689 "name": "Existed_Raid", 00:13:56.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.689 "strip_size_kb": 0, 00:13:56.689 "state": "configuring", 00:13:56.689 "raid_level": "raid1", 00:13:56.689 "superblock": false, 00:13:56.689 "num_base_bdevs": 3, 00:13:56.689 "num_base_bdevs_discovered": 2, 00:13:56.689 "num_base_bdevs_operational": 3, 00:13:56.689 "base_bdevs_list": [ 00:13:56.689 { 00:13:56.689 "name": null, 00:13:56.689 "uuid": "931f560d-48bc-11ef-a06c-59ddad71024c", 00:13:56.689 "is_configured": false, 00:13:56.689 "data_offset": 0, 00:13:56.689 "data_size": 65536 
00:13:56.689 }, 00:13:56.689 { 00:13:56.689 "name": "BaseBdev2", 00:13:56.689 "uuid": "90dc18a7-48bc-11ef-a06c-59ddad71024c", 00:13:56.689 "is_configured": true, 00:13:56.689 "data_offset": 0, 00:13:56.689 "data_size": 65536 00:13:56.689 }, 00:13:56.689 { 00:13:56.689 "name": "BaseBdev3", 00:13:56.689 "uuid": "91562b17-48bc-11ef-a06c-59ddad71024c", 00:13:56.689 "is_configured": true, 00:13:56.689 "data_offset": 0, 00:13:56.689 "data_size": 65536 00:13:56.689 } 00:13:56.689 ] 00:13:56.689 }' 00:13:56.689 06:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.689 06:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.256 06:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.256 06:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:57.514 06:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:57.514 06:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.514 06:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:57.773 06:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 931f560d-48bc-11ef-a06c-59ddad71024c 00:13:58.031 [2024-07-23 06:27:10.412190] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:58.031 [2024-07-23 06:27:10.412219] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x84fa7c34f00 00:13:58.031 [2024-07-23 06:27:10.412239] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:58.031 [2024-07-23 06:27:10.412261] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x84fa7c97e20 00:13:58.031 [2024-07-23 06:27:10.412329] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x84fa7c34f00 00:13:58.031 [2024-07-23 06:27:10.412333] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x84fa7c34f00 00:13:58.031 [2024-07-23 06:27:10.412369] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.031 NewBaseBdev 00:13:58.031 06:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:58.031 06:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:58.031 06:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:58.031 06:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:58.031 06:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:58.031 06:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:58.031 06:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:58.299 06:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:58.574 [ 00:13:58.574 { 00:13:58.574 "name": "NewBaseBdev", 00:13:58.574 "aliases": [ 00:13:58.574 "931f560d-48bc-11ef-a06c-59ddad71024c" 00:13:58.574 ], 00:13:58.574 "product_name": "Malloc disk", 00:13:58.574 "block_size": 512, 00:13:58.574 "num_blocks": 65536, 00:13:58.574 "uuid": "931f560d-48bc-11ef-a06c-59ddad71024c", 00:13:58.574 "assigned_rate_limits": { 00:13:58.574 "rw_ios_per_sec": 0, 00:13:58.574 "rw_mbytes_per_sec": 0, 00:13:58.574 "r_mbytes_per_sec": 0, 00:13:58.574 "w_mbytes_per_sec": 0 00:13:58.574 }, 00:13:58.574 "claimed": true, 00:13:58.574 "claim_type": "exclusive_write", 00:13:58.574 "zoned": false, 00:13:58.574 "supported_io_types": { 00:13:58.574 "read": true, 00:13:58.574 "write": true, 00:13:58.574 "unmap": true, 00:13:58.574 "flush": true, 00:13:58.574 "reset": true, 00:13:58.574 "nvme_admin": false, 00:13:58.574 "nvme_io": false, 00:13:58.574 "nvme_io_md": false, 00:13:58.574 "write_zeroes": true, 00:13:58.574 "zcopy": true, 00:13:58.574 "get_zone_info": false, 00:13:58.574 "zone_management": false, 00:13:58.574 "zone_append": false, 00:13:58.574 "compare": false, 00:13:58.574 "compare_and_write": false, 00:13:58.574 "abort": true, 00:13:58.574 "seek_hole": false, 00:13:58.574 "seek_data": false, 00:13:58.574 "copy": true, 00:13:58.574 "nvme_iov_md": false 00:13:58.574 }, 00:13:58.574 "memory_domains": [ 00:13:58.574 { 00:13:58.574 "dma_device_id": "system", 00:13:58.574 "dma_device_type": 1 00:13:58.574 }, 00:13:58.574 { 00:13:58.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.574 "dma_device_type": 2 00:13:58.574 } 00:13:58.574 ], 00:13:58.574 "driver_specific": {} 00:13:58.574 } 00:13:58.574 ] 00:13:58.574 06:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:58.574 06:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:58.574 06:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:58.574 06:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:58.574 06:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:58.574 06:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:58.574 06:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:58.574 06:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:58.574 06:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:58.574 06:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:58.574 06:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:58.574 06:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.574 06:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.833 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.833 "name": "Existed_Raid", 00:13:58.833 "uuid": "97791a53-48bc-11ef-a06c-59ddad71024c", 00:13:58.833 
"strip_size_kb": 0, 00:13:58.833 "state": "online", 00:13:58.833 "raid_level": "raid1", 00:13:58.833 "superblock": false, 00:13:58.833 "num_base_bdevs": 3, 00:13:58.833 "num_base_bdevs_discovered": 3, 00:13:58.833 "num_base_bdevs_operational": 3, 00:13:58.833 "base_bdevs_list": [ 00:13:58.833 { 00:13:58.833 "name": "NewBaseBdev", 00:13:58.833 "uuid": "931f560d-48bc-11ef-a06c-59ddad71024c", 00:13:58.833 "is_configured": true, 00:13:58.833 "data_offset": 0, 00:13:58.833 "data_size": 65536 00:13:58.833 }, 00:13:58.833 { 00:13:58.833 "name": "BaseBdev2", 00:13:58.833 "uuid": "90dc18a7-48bc-11ef-a06c-59ddad71024c", 00:13:58.833 "is_configured": true, 00:13:58.833 "data_offset": 0, 00:13:58.833 "data_size": 65536 00:13:58.833 }, 00:13:58.833 { 00:13:58.833 "name": "BaseBdev3", 00:13:58.833 "uuid": "91562b17-48bc-11ef-a06c-59ddad71024c", 00:13:58.834 "is_configured": true, 00:13:58.834 "data_offset": 0, 00:13:58.834 "data_size": 65536 00:13:58.834 } 00:13:58.834 ] 00:13:58.834 }' 00:13:58.834 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.834 06:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.094 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:59.094 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:59.094 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:59.094 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:59.094 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:59.094 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:59.094 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:59.094 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:59.353 [2024-07-23 06:27:11.756130] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.353 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:59.353 "name": "Existed_Raid", 00:13:59.353 "aliases": [ 00:13:59.353 "97791a53-48bc-11ef-a06c-59ddad71024c" 00:13:59.353 ], 00:13:59.353 "product_name": "Raid Volume", 00:13:59.353 "block_size": 512, 00:13:59.353 "num_blocks": 65536, 00:13:59.353 "uuid": "97791a53-48bc-11ef-a06c-59ddad71024c", 00:13:59.353 "assigned_rate_limits": { 00:13:59.353 "rw_ios_per_sec": 0, 00:13:59.353 "rw_mbytes_per_sec": 0, 00:13:59.353 "r_mbytes_per_sec": 0, 00:13:59.353 "w_mbytes_per_sec": 0 00:13:59.353 }, 00:13:59.353 "claimed": false, 00:13:59.353 "zoned": false, 00:13:59.353 "supported_io_types": { 00:13:59.353 "read": true, 00:13:59.353 "write": true, 00:13:59.353 "unmap": false, 00:13:59.353 "flush": false, 00:13:59.353 "reset": true, 00:13:59.353 "nvme_admin": false, 00:13:59.353 "nvme_io": false, 00:13:59.353 "nvme_io_md": false, 00:13:59.353 "write_zeroes": true, 00:13:59.353 "zcopy": false, 00:13:59.353 "get_zone_info": false, 00:13:59.353 "zone_management": false, 00:13:59.353 "zone_append": false, 00:13:59.353 "compare": false, 00:13:59.353 "compare_and_write": false, 00:13:59.353 "abort": false, 00:13:59.353 "seek_hole": false, 00:13:59.353 "seek_data": false, 
00:13:59.353 "copy": false, 00:13:59.353 "nvme_iov_md": false 00:13:59.353 }, 00:13:59.353 "memory_domains": [ 00:13:59.353 { 00:13:59.353 "dma_device_id": "system", 00:13:59.353 "dma_device_type": 1 00:13:59.353 }, 00:13:59.353 { 00:13:59.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.353 "dma_device_type": 2 00:13:59.353 }, 00:13:59.353 { 00:13:59.353 "dma_device_id": "system", 00:13:59.353 "dma_device_type": 1 00:13:59.353 }, 00:13:59.353 { 00:13:59.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.353 "dma_device_type": 2 00:13:59.353 }, 00:13:59.353 { 00:13:59.353 "dma_device_id": "system", 00:13:59.353 "dma_device_type": 1 00:13:59.353 }, 00:13:59.353 { 00:13:59.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.353 "dma_device_type": 2 00:13:59.353 } 00:13:59.353 ], 00:13:59.353 "driver_specific": { 00:13:59.353 "raid": { 00:13:59.353 "uuid": "97791a53-48bc-11ef-a06c-59ddad71024c", 00:13:59.353 "strip_size_kb": 0, 00:13:59.353 "state": "online", 00:13:59.353 "raid_level": "raid1", 00:13:59.353 "superblock": false, 00:13:59.353 "num_base_bdevs": 3, 00:13:59.353 "num_base_bdevs_discovered": 3, 00:13:59.353 "num_base_bdevs_operational": 3, 00:13:59.353 "base_bdevs_list": [ 00:13:59.353 { 00:13:59.353 "name": "NewBaseBdev", 00:13:59.353 "uuid": "931f560d-48bc-11ef-a06c-59ddad71024c", 00:13:59.353 "is_configured": true, 00:13:59.353 "data_offset": 0, 00:13:59.353 "data_size": 65536 00:13:59.353 }, 00:13:59.353 { 00:13:59.353 "name": "BaseBdev2", 00:13:59.353 "uuid": "90dc18a7-48bc-11ef-a06c-59ddad71024c", 00:13:59.353 "is_configured": true, 00:13:59.353 "data_offset": 0, 00:13:59.353 "data_size": 65536 00:13:59.353 }, 00:13:59.353 { 00:13:59.353 "name": "BaseBdev3", 00:13:59.353 "uuid": "91562b17-48bc-11ef-a06c-59ddad71024c", 00:13:59.353 "is_configured": true, 00:13:59.353 "data_offset": 0, 00:13:59.353 "data_size": 65536 00:13:59.353 } 00:13:59.353 ] 00:13:59.353 } 00:13:59.353 } 00:13:59.353 }' 00:13:59.353 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:59.353 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:59.353 BaseBdev2 00:13:59.353 BaseBdev3' 00:13:59.353 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:59.353 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:59.353 06:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:59.611 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:59.611 "name": "NewBaseBdev", 00:13:59.611 "aliases": [ 00:13:59.611 "931f560d-48bc-11ef-a06c-59ddad71024c" 00:13:59.611 ], 00:13:59.611 "product_name": "Malloc disk", 00:13:59.611 "block_size": 512, 00:13:59.611 "num_blocks": 65536, 00:13:59.611 "uuid": "931f560d-48bc-11ef-a06c-59ddad71024c", 00:13:59.611 "assigned_rate_limits": { 00:13:59.611 "rw_ios_per_sec": 0, 00:13:59.611 "rw_mbytes_per_sec": 0, 00:13:59.611 "r_mbytes_per_sec": 0, 00:13:59.611 "w_mbytes_per_sec": 0 00:13:59.611 }, 00:13:59.611 "claimed": true, 00:13:59.611 "claim_type": "exclusive_write", 00:13:59.611 "zoned": false, 00:13:59.611 "supported_io_types": { 00:13:59.611 "read": true, 00:13:59.611 "write": true, 00:13:59.611 "unmap": true, 00:13:59.611 "flush": true, 00:13:59.611 
"reset": true, 00:13:59.611 "nvme_admin": false, 00:13:59.611 "nvme_io": false, 00:13:59.611 "nvme_io_md": false, 00:13:59.611 "write_zeroes": true, 00:13:59.611 "zcopy": true, 00:13:59.611 "get_zone_info": false, 00:13:59.611 "zone_management": false, 00:13:59.611 "zone_append": false, 00:13:59.611 "compare": false, 00:13:59.611 "compare_and_write": false, 00:13:59.611 "abort": true, 00:13:59.611 "seek_hole": false, 00:13:59.611 "seek_data": false, 00:13:59.611 "copy": true, 00:13:59.611 "nvme_iov_md": false 00:13:59.611 }, 00:13:59.611 "memory_domains": [ 00:13:59.611 { 00:13:59.611 "dma_device_id": "system", 00:13:59.611 "dma_device_type": 1 00:13:59.611 }, 00:13:59.611 { 00:13:59.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.611 "dma_device_type": 2 00:13:59.611 } 00:13:59.611 ], 00:13:59.611 "driver_specific": {} 00:13:59.611 }' 00:13:59.611 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:59.611 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:59.611 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:59.611 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:59.611 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:59.611 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:59.611 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:59.612 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:59.612 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:59.612 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:59.612 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:59.612 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:59.612 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:59.612 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:59.612 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:59.871 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:59.871 "name": "BaseBdev2", 00:13:59.871 "aliases": [ 00:13:59.871 "90dc18a7-48bc-11ef-a06c-59ddad71024c" 00:13:59.871 ], 00:13:59.871 "product_name": "Malloc disk", 00:13:59.871 "block_size": 512, 00:13:59.871 "num_blocks": 65536, 00:13:59.871 "uuid": "90dc18a7-48bc-11ef-a06c-59ddad71024c", 00:13:59.871 "assigned_rate_limits": { 00:13:59.871 "rw_ios_per_sec": 0, 00:13:59.871 "rw_mbytes_per_sec": 0, 00:13:59.871 "r_mbytes_per_sec": 0, 00:13:59.871 "w_mbytes_per_sec": 0 00:13:59.871 }, 00:13:59.871 "claimed": true, 00:13:59.871 "claim_type": "exclusive_write", 00:13:59.871 "zoned": false, 00:13:59.871 "supported_io_types": { 00:13:59.871 "read": true, 00:13:59.871 "write": true, 00:13:59.871 "unmap": true, 00:13:59.871 "flush": true, 00:13:59.871 "reset": true, 00:13:59.871 "nvme_admin": false, 00:13:59.871 "nvme_io": false, 00:13:59.871 "nvme_io_md": false, 00:13:59.871 "write_zeroes": true, 00:13:59.871 "zcopy": true, 00:13:59.871 
"get_zone_info": false, 00:13:59.871 "zone_management": false, 00:13:59.871 "zone_append": false, 00:13:59.871 "compare": false, 00:13:59.871 "compare_and_write": false, 00:13:59.871 "abort": true, 00:13:59.871 "seek_hole": false, 00:13:59.871 "seek_data": false, 00:13:59.871 "copy": true, 00:13:59.871 "nvme_iov_md": false 00:13:59.871 }, 00:13:59.871 "memory_domains": [ 00:13:59.871 { 00:13:59.871 "dma_device_id": "system", 00:13:59.871 "dma_device_type": 1 00:13:59.871 }, 00:13:59.871 { 00:13:59.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.871 "dma_device_type": 2 00:13:59.871 } 00:13:59.871 ], 00:13:59.871 "driver_specific": {} 00:13:59.871 }' 00:13:59.871 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:59.871 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:59.871 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:59.871 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:59.871 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:59.871 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:59.871 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:59.871 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:00.156 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:00.156 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:00.156 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:00.156 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:00.156 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:00.156 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:00.156 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:00.443 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:00.443 "name": "BaseBdev3", 00:14:00.443 "aliases": [ 00:14:00.443 "91562b17-48bc-11ef-a06c-59ddad71024c" 00:14:00.443 ], 00:14:00.443 "product_name": "Malloc disk", 00:14:00.443 "block_size": 512, 00:14:00.443 "num_blocks": 65536, 00:14:00.443 "uuid": "91562b17-48bc-11ef-a06c-59ddad71024c", 00:14:00.443 "assigned_rate_limits": { 00:14:00.443 "rw_ios_per_sec": 0, 00:14:00.443 "rw_mbytes_per_sec": 0, 00:14:00.443 "r_mbytes_per_sec": 0, 00:14:00.443 "w_mbytes_per_sec": 0 00:14:00.443 }, 00:14:00.443 "claimed": true, 00:14:00.443 "claim_type": "exclusive_write", 00:14:00.443 "zoned": false, 00:14:00.443 "supported_io_types": { 00:14:00.443 "read": true, 00:14:00.443 "write": true, 00:14:00.443 "unmap": true, 00:14:00.443 "flush": true, 00:14:00.443 "reset": true, 00:14:00.443 "nvme_admin": false, 00:14:00.443 "nvme_io": false, 00:14:00.443 "nvme_io_md": false, 00:14:00.443 "write_zeroes": true, 00:14:00.443 "zcopy": true, 00:14:00.443 "get_zone_info": false, 00:14:00.443 "zone_management": false, 00:14:00.443 "zone_append": false, 00:14:00.443 "compare": false, 00:14:00.443 "compare_and_write": false, 00:14:00.443 "abort": true, 
00:14:00.443 "seek_hole": false, 00:14:00.443 "seek_data": false, 00:14:00.443 "copy": true, 00:14:00.443 "nvme_iov_md": false 00:14:00.443 }, 00:14:00.443 "memory_domains": [ 00:14:00.443 { 00:14:00.443 "dma_device_id": "system", 00:14:00.443 "dma_device_type": 1 00:14:00.443 }, 00:14:00.443 { 00:14:00.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.443 "dma_device_type": 2 00:14:00.443 } 00:14:00.443 ], 00:14:00.443 "driver_specific": {} 00:14:00.443 }' 00:14:00.443 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:00.443 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:00.443 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:00.443 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:00.443 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:00.443 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:00.443 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:00.443 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:00.443 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:00.443 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:00.443 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:00.443 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:00.443 06:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:00.702 [2024-07-23 06:27:12.988117] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:00.702 [2024-07-23 06:27:12.988144] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.702 [2024-07-23 06:27:12.988167] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.702 [2024-07-23 06:27:12.988241] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.702 [2024-07-23 06:27:12.988246] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x84fa7c34f00 name Existed_Raid, state offline 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 56144 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 56144 ']' 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 56144 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 56144 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' 
bdev_svc = sudo ']' 00:14:00.702 killing process with pid 56144 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56144' 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 56144 00:14:00.702 [2024-07-23 06:27:13.019710] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 56144 00:14:00.702 [2024-07-23 06:27:13.037503] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:00.702 00:14:00.702 real 0m25.492s 00:14:00.702 user 0m46.650s 00:14:00.702 sys 0m3.514s 00:14:00.702 ************************************ 00:14:00.702 END TEST raid_state_function_test 00:14:00.702 ************************************ 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.702 06:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.966 06:27:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:00.966 06:27:13 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:14:00.966 06:27:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:00.966 06:27:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.966 06:27:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:00.966 ************************************ 00:14:00.966 START TEST raid_state_function_test_sb 00:14:00.966 ************************************ 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=56877 00:14:00.966 Process raid pid: 56877 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56877' 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 56877 /var/tmp/spdk-raid.sock 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 56877 ']' 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.966 06:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.966 [2024-07-23 06:27:13.284004] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:00.966 [2024-07-23 06:27:13.284180] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:01.533 EAL: TSC is not safe to use in SMP mode 00:14:01.533 EAL: TSC is not invariant 00:14:01.533 [2024-07-23 06:27:13.819916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.533 [2024-07-23 06:27:13.904448] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:14:01.533 [2024-07-23 06:27:13.906546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.533 [2024-07-23 06:27:13.907355] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.533 [2024-07-23 06:27:13.907370] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.100 06:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.100 06:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:14:02.100 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:02.395 [2024-07-23 06:27:14.639108] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:02.395 [2024-07-23 06:27:14.639160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:02.395 [2024-07-23 06:27:14.639166] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:02.395 [2024-07-23 06:27:14.639175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:02.395 [2024-07-23 06:27:14.639178] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:02.395 [2024-07-23 06:27:14.639186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:02.395 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:02.395 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:02.395 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:02.395 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:02.395 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:02.395 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:02.395 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:02.395 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:02.395 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:02.395 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:02.395 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.395 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.654 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:02.654 "name": "Existed_Raid", 00:14:02.654 "uuid": "99fe11e2-48bc-11ef-a06c-59ddad71024c", 00:14:02.654 "strip_size_kb": 0, 00:14:02.654 "state": "configuring", 00:14:02.654 "raid_level": "raid1", 00:14:02.654 "superblock": true, 00:14:02.654 "num_base_bdevs": 3, 00:14:02.654 "num_base_bdevs_discovered": 0, 00:14:02.654 "num_base_bdevs_operational": 
3, 00:14:02.654 "base_bdevs_list": [ 00:14:02.654 { 00:14:02.654 "name": "BaseBdev1", 00:14:02.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.654 "is_configured": false, 00:14:02.654 "data_offset": 0, 00:14:02.654 "data_size": 0 00:14:02.654 }, 00:14:02.654 { 00:14:02.654 "name": "BaseBdev2", 00:14:02.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.654 "is_configured": false, 00:14:02.654 "data_offset": 0, 00:14:02.654 "data_size": 0 00:14:02.654 }, 00:14:02.654 { 00:14:02.654 "name": "BaseBdev3", 00:14:02.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.654 "is_configured": false, 00:14:02.654 "data_offset": 0, 00:14:02.654 "data_size": 0 00:14:02.654 } 00:14:02.654 ] 00:14:02.654 }' 00:14:02.654 06:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:02.654 06:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.913 06:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:03.172 [2024-07-23 06:27:15.543133] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:03.172 [2024-07-23 06:27:15.543161] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x226fe6434500 name Existed_Raid, state configuring 00:14:03.172 06:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:03.431 [2024-07-23 06:27:15.783209] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:03.431 [2024-07-23 06:27:15.783275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:03.431 [2024-07-23 06:27:15.783296] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:03.431 [2024-07-23 06:27:15.783305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:03.431 [2024-07-23 06:27:15.783308] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:03.431 [2024-07-23 06:27:15.783316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:03.431 06:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:03.689 [2024-07-23 06:27:16.020250] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:03.689 BaseBdev1 00:14:03.689 06:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:03.689 06:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:03.689 06:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:03.690 06:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:03.690 06:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:03.690 06:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:03.690 06:27:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:03.948 06:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:04.207 [ 00:14:04.207 { 00:14:04.207 "name": "BaseBdev1", 00:14:04.207 "aliases": [ 00:14:04.207 "9ad0a827-48bc-11ef-a06c-59ddad71024c" 00:14:04.207 ], 00:14:04.207 "product_name": "Malloc disk", 00:14:04.207 "block_size": 512, 00:14:04.207 "num_blocks": 65536, 00:14:04.207 "uuid": "9ad0a827-48bc-11ef-a06c-59ddad71024c", 00:14:04.207 "assigned_rate_limits": { 00:14:04.207 "rw_ios_per_sec": 0, 00:14:04.207 "rw_mbytes_per_sec": 0, 00:14:04.207 "r_mbytes_per_sec": 0, 00:14:04.207 "w_mbytes_per_sec": 0 00:14:04.207 }, 00:14:04.207 "claimed": true, 00:14:04.207 "claim_type": "exclusive_write", 00:14:04.207 "zoned": false, 00:14:04.207 "supported_io_types": { 00:14:04.207 "read": true, 00:14:04.207 "write": true, 00:14:04.207 "unmap": true, 00:14:04.207 "flush": true, 00:14:04.207 "reset": true, 00:14:04.207 "nvme_admin": false, 00:14:04.207 "nvme_io": false, 00:14:04.207 "nvme_io_md": false, 00:14:04.207 "write_zeroes": true, 00:14:04.207 "zcopy": true, 00:14:04.207 "get_zone_info": false, 00:14:04.207 "zone_management": false, 00:14:04.207 "zone_append": false, 00:14:04.207 "compare": false, 00:14:04.207 "compare_and_write": false, 00:14:04.207 "abort": true, 00:14:04.207 "seek_hole": false, 00:14:04.207 "seek_data": false, 00:14:04.207 "copy": true, 00:14:04.207 "nvme_iov_md": false 00:14:04.207 }, 00:14:04.207 "memory_domains": [ 00:14:04.207 { 00:14:04.207 "dma_device_id": "system", 00:14:04.207 "dma_device_type": 1 00:14:04.207 }, 00:14:04.207 { 00:14:04.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.207 "dma_device_type": 2 00:14:04.207 } 00:14:04.207 ], 00:14:04.207 "driver_specific": {} 00:14:04.207 } 00:14:04.207 ] 00:14:04.207 06:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:04.207 06:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:04.207 06:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:04.207 06:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:04.207 06:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:04.207 06:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:04.207 06:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:04.207 06:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:04.207 06:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:04.207 06:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:04.207 06:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:04.207 06:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.207 06:27:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.465 06:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:04.465 "name": "Existed_Raid", 00:14:04.465 "uuid": "9aaca554-48bc-11ef-a06c-59ddad71024c", 00:14:04.465 "strip_size_kb": 0, 00:14:04.465 "state": "configuring", 00:14:04.465 "raid_level": "raid1", 00:14:04.465 "superblock": true, 00:14:04.465 "num_base_bdevs": 3, 00:14:04.465 "num_base_bdevs_discovered": 1, 00:14:04.465 "num_base_bdevs_operational": 3, 00:14:04.465 "base_bdevs_list": [ 00:14:04.465 { 00:14:04.465 "name": "BaseBdev1", 00:14:04.465 "uuid": "9ad0a827-48bc-11ef-a06c-59ddad71024c", 00:14:04.465 "is_configured": true, 00:14:04.465 "data_offset": 2048, 00:14:04.465 "data_size": 63488 00:14:04.465 }, 00:14:04.465 { 00:14:04.465 "name": "BaseBdev2", 00:14:04.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.466 "is_configured": false, 00:14:04.466 "data_offset": 0, 00:14:04.466 "data_size": 0 00:14:04.466 }, 00:14:04.466 { 00:14:04.466 "name": "BaseBdev3", 00:14:04.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.466 "is_configured": false, 00:14:04.466 "data_offset": 0, 00:14:04.466 "data_size": 0 00:14:04.466 } 00:14:04.466 ] 00:14:04.466 }' 00:14:04.466 06:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:04.466 06:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.724 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:04.983 [2024-07-23 06:27:17.343245] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:04.983 [2024-07-23 06:27:17.343284] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x226fe6434500 name Existed_Raid, state configuring 00:14:04.983 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:05.241 [2024-07-23 06:27:17.599282] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:05.241 [2024-07-23 06:27:17.600083] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:05.241 [2024-07-23 06:27:17.600123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:05.241 [2024-07-23 06:27:17.600128] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:05.241 [2024-07-23 06:27:17.600137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:05.241 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:05.241 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:05.241 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:05.241 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:05.241 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:05.241 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:14:05.241 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:05.241 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:05.241 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.241 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.241 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.241 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.241 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.241 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.499 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:05.499 "name": "Existed_Raid", 00:14:05.499 "uuid": "9bc1c176-48bc-11ef-a06c-59ddad71024c", 00:14:05.499 "strip_size_kb": 0, 00:14:05.499 "state": "configuring", 00:14:05.499 "raid_level": "raid1", 00:14:05.499 "superblock": true, 00:14:05.499 "num_base_bdevs": 3, 00:14:05.499 "num_base_bdevs_discovered": 1, 00:14:05.499 "num_base_bdevs_operational": 3, 00:14:05.499 "base_bdevs_list": [ 00:14:05.499 { 00:14:05.499 "name": "BaseBdev1", 00:14:05.499 "uuid": "9ad0a827-48bc-11ef-a06c-59ddad71024c", 00:14:05.499 "is_configured": true, 00:14:05.499 "data_offset": 2048, 00:14:05.499 "data_size": 63488 00:14:05.499 }, 00:14:05.499 { 00:14:05.499 "name": "BaseBdev2", 00:14:05.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.499 "is_configured": false, 00:14:05.499 "data_offset": 0, 00:14:05.499 "data_size": 0 00:14:05.499 }, 00:14:05.499 { 00:14:05.499 "name": "BaseBdev3", 00:14:05.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.499 "is_configured": false, 00:14:05.499 "data_offset": 0, 00:14:05.499 "data_size": 0 00:14:05.499 } 00:14:05.499 ] 00:14:05.499 }' 00:14:05.499 06:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:05.499 06:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.757 06:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:06.017 [2024-07-23 06:27:18.483543] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:06.017 BaseBdev2 00:14:06.017 06:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:06.017 06:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:06.017 06:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:06.017 06:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:06.017 06:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:06.017 06:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:06.017 06:27:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:06.275 06:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:06.535 [ 00:14:06.535 { 00:14:06.535 "name": "BaseBdev2", 00:14:06.535 "aliases": [ 00:14:06.535 "9c48aa50-48bc-11ef-a06c-59ddad71024c" 00:14:06.535 ], 00:14:06.535 "product_name": "Malloc disk", 00:14:06.535 "block_size": 512, 00:14:06.535 "num_blocks": 65536, 00:14:06.535 "uuid": "9c48aa50-48bc-11ef-a06c-59ddad71024c", 00:14:06.535 "assigned_rate_limits": { 00:14:06.535 "rw_ios_per_sec": 0, 00:14:06.535 "rw_mbytes_per_sec": 0, 00:14:06.535 "r_mbytes_per_sec": 0, 00:14:06.535 "w_mbytes_per_sec": 0 00:14:06.535 }, 00:14:06.535 "claimed": true, 00:14:06.535 "claim_type": "exclusive_write", 00:14:06.535 "zoned": false, 00:14:06.535 "supported_io_types": { 00:14:06.535 "read": true, 00:14:06.535 "write": true, 00:14:06.535 "unmap": true, 00:14:06.535 "flush": true, 00:14:06.535 "reset": true, 00:14:06.535 "nvme_admin": false, 00:14:06.535 "nvme_io": false, 00:14:06.535 "nvme_io_md": false, 00:14:06.535 "write_zeroes": true, 00:14:06.535 "zcopy": true, 00:14:06.535 "get_zone_info": false, 00:14:06.535 "zone_management": false, 00:14:06.535 "zone_append": false, 00:14:06.535 "compare": false, 00:14:06.535 "compare_and_write": false, 00:14:06.535 "abort": true, 00:14:06.535 "seek_hole": false, 00:14:06.535 "seek_data": false, 00:14:06.535 "copy": true, 00:14:06.535 "nvme_iov_md": false 00:14:06.535 }, 00:14:06.535 "memory_domains": [ 00:14:06.535 { 00:14:06.535 "dma_device_id": "system", 00:14:06.535 "dma_device_type": 1 00:14:06.535 }, 00:14:06.535 { 00:14:06.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.535 "dma_device_type": 2 00:14:06.535 } 00:14:06.535 ], 00:14:06.535 "driver_specific": {} 00:14:06.535 } 00:14:06.535 ] 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:06.794 06:27:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.794 "name": "Existed_Raid", 00:14:06.794 "uuid": "9bc1c176-48bc-11ef-a06c-59ddad71024c", 00:14:06.794 "strip_size_kb": 0, 00:14:06.794 "state": "configuring", 00:14:06.794 "raid_level": "raid1", 00:14:06.794 "superblock": true, 00:14:06.794 "num_base_bdevs": 3, 00:14:06.794 "num_base_bdevs_discovered": 2, 00:14:06.794 "num_base_bdevs_operational": 3, 00:14:06.794 "base_bdevs_list": [ 00:14:06.794 { 00:14:06.794 "name": "BaseBdev1", 00:14:06.794 "uuid": "9ad0a827-48bc-11ef-a06c-59ddad71024c", 00:14:06.794 "is_configured": true, 00:14:06.794 "data_offset": 2048, 00:14:06.794 "data_size": 63488 00:14:06.794 }, 00:14:06.794 { 00:14:06.794 "name": "BaseBdev2", 00:14:06.794 "uuid": "9c48aa50-48bc-11ef-a06c-59ddad71024c", 00:14:06.794 "is_configured": true, 00:14:06.794 "data_offset": 2048, 00:14:06.794 "data_size": 63488 00:14:06.794 }, 00:14:06.794 { 00:14:06.794 "name": "BaseBdev3", 00:14:06.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.794 "is_configured": false, 00:14:06.794 "data_offset": 0, 00:14:06.794 "data_size": 0 00:14:06.794 } 00:14:06.794 ] 00:14:06.794 }' 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.794 06:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.360 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:07.619 [2024-07-23 06:27:19.927612] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:07.619 [2024-07-23 06:27:19.927707] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x226fe6434a00 00:14:07.619 [2024-07-23 06:27:19.927713] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:07.619 [2024-07-23 06:27:19.927734] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x226fe6497e20 00:14:07.619 [2024-07-23 06:27:19.927787] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x226fe6434a00 00:14:07.619 [2024-07-23 06:27:19.927791] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x226fe6434a00 00:14:07.619 [2024-07-23 06:27:19.927810] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.619 BaseBdev3 00:14:07.619 06:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:07.619 06:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:07.619 06:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:07.619 06:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:07.619 06:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:07.619 06:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:07.619 06:27:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:07.877 06:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:08.135 [ 00:14:08.135 { 00:14:08.135 "name": "BaseBdev3", 00:14:08.135 "aliases": [ 00:14:08.135 "9d2503dc-48bc-11ef-a06c-59ddad71024c" 00:14:08.135 ], 00:14:08.135 "product_name": "Malloc disk", 00:14:08.135 "block_size": 512, 00:14:08.135 "num_blocks": 65536, 00:14:08.135 "uuid": "9d2503dc-48bc-11ef-a06c-59ddad71024c", 00:14:08.135 "assigned_rate_limits": { 00:14:08.135 "rw_ios_per_sec": 0, 00:14:08.135 "rw_mbytes_per_sec": 0, 00:14:08.135 "r_mbytes_per_sec": 0, 00:14:08.135 "w_mbytes_per_sec": 0 00:14:08.135 }, 00:14:08.135 "claimed": true, 00:14:08.135 "claim_type": "exclusive_write", 00:14:08.135 "zoned": false, 00:14:08.135 "supported_io_types": { 00:14:08.135 "read": true, 00:14:08.135 "write": true, 00:14:08.135 "unmap": true, 00:14:08.135 "flush": true, 00:14:08.135 "reset": true, 00:14:08.135 "nvme_admin": false, 00:14:08.135 "nvme_io": false, 00:14:08.135 "nvme_io_md": false, 00:14:08.135 "write_zeroes": true, 00:14:08.135 "zcopy": true, 00:14:08.135 "get_zone_info": false, 00:14:08.135 "zone_management": false, 00:14:08.135 "zone_append": false, 00:14:08.135 "compare": false, 00:14:08.135 "compare_and_write": false, 00:14:08.135 "abort": true, 00:14:08.135 "seek_hole": false, 00:14:08.135 "seek_data": false, 00:14:08.135 "copy": true, 00:14:08.135 "nvme_iov_md": false 00:14:08.135 }, 00:14:08.135 "memory_domains": [ 00:14:08.135 { 00:14:08.135 "dma_device_id": "system", 00:14:08.135 "dma_device_type": 1 00:14:08.135 }, 00:14:08.135 { 00:14:08.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.135 "dma_device_type": 2 00:14:08.135 } 00:14:08.135 ], 00:14:08.135 "driver_specific": {} 00:14:08.135 } 00:14:08.135 ] 00:14:08.135 06:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:08.135 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:08.135 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:08.135 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:08.135 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:08.135 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:08.135 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:08.135 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:08.135 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:08.135 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:08.135 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:08.135 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:08.135 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:08.135 06:27:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.135 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.394 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:08.394 "name": "Existed_Raid", 00:14:08.394 "uuid": "9bc1c176-48bc-11ef-a06c-59ddad71024c", 00:14:08.394 "strip_size_kb": 0, 00:14:08.394 "state": "online", 00:14:08.394 "raid_level": "raid1", 00:14:08.394 "superblock": true, 00:14:08.394 "num_base_bdevs": 3, 00:14:08.394 "num_base_bdevs_discovered": 3, 00:14:08.394 "num_base_bdevs_operational": 3, 00:14:08.394 "base_bdevs_list": [ 00:14:08.394 { 00:14:08.394 "name": "BaseBdev1", 00:14:08.394 "uuid": "9ad0a827-48bc-11ef-a06c-59ddad71024c", 00:14:08.394 "is_configured": true, 00:14:08.394 "data_offset": 2048, 00:14:08.394 "data_size": 63488 00:14:08.394 }, 00:14:08.394 { 00:14:08.394 "name": "BaseBdev2", 00:14:08.394 "uuid": "9c48aa50-48bc-11ef-a06c-59ddad71024c", 00:14:08.394 "is_configured": true, 00:14:08.394 "data_offset": 2048, 00:14:08.394 "data_size": 63488 00:14:08.394 }, 00:14:08.394 { 00:14:08.394 "name": "BaseBdev3", 00:14:08.394 "uuid": "9d2503dc-48bc-11ef-a06c-59ddad71024c", 00:14:08.394 "is_configured": true, 00:14:08.394 "data_offset": 2048, 00:14:08.394 "data_size": 63488 00:14:08.394 } 00:14:08.394 ] 00:14:08.394 }' 00:14:08.394 06:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:08.394 06:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.689 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:08.689 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:08.689 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:08.689 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:08.689 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:08.689 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:08.689 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:08.689 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:08.947 [2024-07-23 06:27:21.267666] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.947 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:08.947 "name": "Existed_Raid", 00:14:08.947 "aliases": [ 00:14:08.947 "9bc1c176-48bc-11ef-a06c-59ddad71024c" 00:14:08.947 ], 00:14:08.947 "product_name": "Raid Volume", 00:14:08.947 "block_size": 512, 00:14:08.947 "num_blocks": 63488, 00:14:08.947 "uuid": "9bc1c176-48bc-11ef-a06c-59ddad71024c", 00:14:08.947 "assigned_rate_limits": { 00:14:08.947 "rw_ios_per_sec": 0, 00:14:08.947 "rw_mbytes_per_sec": 0, 00:14:08.947 "r_mbytes_per_sec": 0, 00:14:08.947 "w_mbytes_per_sec": 0 00:14:08.947 }, 00:14:08.947 "claimed": false, 00:14:08.947 "zoned": false, 00:14:08.947 "supported_io_types": { 00:14:08.947 "read": true, 
00:14:08.947 "write": true, 00:14:08.947 "unmap": false, 00:14:08.947 "flush": false, 00:14:08.947 "reset": true, 00:14:08.947 "nvme_admin": false, 00:14:08.947 "nvme_io": false, 00:14:08.947 "nvme_io_md": false, 00:14:08.947 "write_zeroes": true, 00:14:08.947 "zcopy": false, 00:14:08.947 "get_zone_info": false, 00:14:08.947 "zone_management": false, 00:14:08.947 "zone_append": false, 00:14:08.947 "compare": false, 00:14:08.947 "compare_and_write": false, 00:14:08.947 "abort": false, 00:14:08.947 "seek_hole": false, 00:14:08.947 "seek_data": false, 00:14:08.947 "copy": false, 00:14:08.947 "nvme_iov_md": false 00:14:08.947 }, 00:14:08.947 "memory_domains": [ 00:14:08.947 { 00:14:08.947 "dma_device_id": "system", 00:14:08.947 "dma_device_type": 1 00:14:08.947 }, 00:14:08.947 { 00:14:08.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.947 "dma_device_type": 2 00:14:08.947 }, 00:14:08.947 { 00:14:08.947 "dma_device_id": "system", 00:14:08.947 "dma_device_type": 1 00:14:08.947 }, 00:14:08.947 { 00:14:08.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.947 "dma_device_type": 2 00:14:08.947 }, 00:14:08.947 { 00:14:08.947 "dma_device_id": "system", 00:14:08.947 "dma_device_type": 1 00:14:08.947 }, 00:14:08.947 { 00:14:08.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.948 "dma_device_type": 2 00:14:08.948 } 00:14:08.948 ], 00:14:08.948 "driver_specific": { 00:14:08.948 "raid": { 00:14:08.948 "uuid": "9bc1c176-48bc-11ef-a06c-59ddad71024c", 00:14:08.948 "strip_size_kb": 0, 00:14:08.948 "state": "online", 00:14:08.948 "raid_level": "raid1", 00:14:08.948 "superblock": true, 00:14:08.948 "num_base_bdevs": 3, 00:14:08.948 "num_base_bdevs_discovered": 3, 00:14:08.948 "num_base_bdevs_operational": 3, 00:14:08.948 "base_bdevs_list": [ 00:14:08.948 { 00:14:08.948 "name": "BaseBdev1", 00:14:08.948 "uuid": "9ad0a827-48bc-11ef-a06c-59ddad71024c", 00:14:08.948 "is_configured": true, 00:14:08.948 "data_offset": 2048, 00:14:08.948 "data_size": 63488 00:14:08.948 }, 00:14:08.948 { 00:14:08.948 "name": "BaseBdev2", 00:14:08.948 "uuid": "9c48aa50-48bc-11ef-a06c-59ddad71024c", 00:14:08.948 "is_configured": true, 00:14:08.948 "data_offset": 2048, 00:14:08.948 "data_size": 63488 00:14:08.948 }, 00:14:08.948 { 00:14:08.948 "name": "BaseBdev3", 00:14:08.948 "uuid": "9d2503dc-48bc-11ef-a06c-59ddad71024c", 00:14:08.948 "is_configured": true, 00:14:08.948 "data_offset": 2048, 00:14:08.948 "data_size": 63488 00:14:08.948 } 00:14:08.948 ] 00:14:08.948 } 00:14:08.948 } 00:14:08.948 }' 00:14:08.948 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.948 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:08.948 BaseBdev2 00:14:08.948 BaseBdev3' 00:14:08.948 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:08.948 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:08.948 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:09.208 "name": "BaseBdev1", 00:14:09.208 "aliases": [ 00:14:09.208 "9ad0a827-48bc-11ef-a06c-59ddad71024c" 00:14:09.208 ], 00:14:09.208 "product_name": "Malloc disk", 00:14:09.208 
"block_size": 512, 00:14:09.208 "num_blocks": 65536, 00:14:09.208 "uuid": "9ad0a827-48bc-11ef-a06c-59ddad71024c", 00:14:09.208 "assigned_rate_limits": { 00:14:09.208 "rw_ios_per_sec": 0, 00:14:09.208 "rw_mbytes_per_sec": 0, 00:14:09.208 "r_mbytes_per_sec": 0, 00:14:09.208 "w_mbytes_per_sec": 0 00:14:09.208 }, 00:14:09.208 "claimed": true, 00:14:09.208 "claim_type": "exclusive_write", 00:14:09.208 "zoned": false, 00:14:09.208 "supported_io_types": { 00:14:09.208 "read": true, 00:14:09.208 "write": true, 00:14:09.208 "unmap": true, 00:14:09.208 "flush": true, 00:14:09.208 "reset": true, 00:14:09.208 "nvme_admin": false, 00:14:09.208 "nvme_io": false, 00:14:09.208 "nvme_io_md": false, 00:14:09.208 "write_zeroes": true, 00:14:09.208 "zcopy": true, 00:14:09.208 "get_zone_info": false, 00:14:09.208 "zone_management": false, 00:14:09.208 "zone_append": false, 00:14:09.208 "compare": false, 00:14:09.208 "compare_and_write": false, 00:14:09.208 "abort": true, 00:14:09.208 "seek_hole": false, 00:14:09.208 "seek_data": false, 00:14:09.208 "copy": true, 00:14:09.208 "nvme_iov_md": false 00:14:09.208 }, 00:14:09.208 "memory_domains": [ 00:14:09.208 { 00:14:09.208 "dma_device_id": "system", 00:14:09.208 "dma_device_type": 1 00:14:09.208 }, 00:14:09.208 { 00:14:09.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.208 "dma_device_type": 2 00:14:09.208 } 00:14:09.208 ], 00:14:09.208 "driver_specific": {} 00:14:09.208 }' 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:09.208 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:09.467 "name": "BaseBdev2", 00:14:09.467 "aliases": [ 00:14:09.467 "9c48aa50-48bc-11ef-a06c-59ddad71024c" 00:14:09.467 ], 00:14:09.467 "product_name": "Malloc disk", 00:14:09.467 "block_size": 512, 00:14:09.467 "num_blocks": 65536, 00:14:09.467 "uuid": "9c48aa50-48bc-11ef-a06c-59ddad71024c", 00:14:09.467 "assigned_rate_limits": { 
00:14:09.467 "rw_ios_per_sec": 0, 00:14:09.467 "rw_mbytes_per_sec": 0, 00:14:09.467 "r_mbytes_per_sec": 0, 00:14:09.467 "w_mbytes_per_sec": 0 00:14:09.467 }, 00:14:09.467 "claimed": true, 00:14:09.467 "claim_type": "exclusive_write", 00:14:09.467 "zoned": false, 00:14:09.467 "supported_io_types": { 00:14:09.467 "read": true, 00:14:09.467 "write": true, 00:14:09.467 "unmap": true, 00:14:09.467 "flush": true, 00:14:09.467 "reset": true, 00:14:09.467 "nvme_admin": false, 00:14:09.467 "nvme_io": false, 00:14:09.467 "nvme_io_md": false, 00:14:09.467 "write_zeroes": true, 00:14:09.467 "zcopy": true, 00:14:09.467 "get_zone_info": false, 00:14:09.467 "zone_management": false, 00:14:09.467 "zone_append": false, 00:14:09.467 "compare": false, 00:14:09.467 "compare_and_write": false, 00:14:09.467 "abort": true, 00:14:09.467 "seek_hole": false, 00:14:09.467 "seek_data": false, 00:14:09.467 "copy": true, 00:14:09.467 "nvme_iov_md": false 00:14:09.467 }, 00:14:09.467 "memory_domains": [ 00:14:09.467 { 00:14:09.467 "dma_device_id": "system", 00:14:09.467 "dma_device_type": 1 00:14:09.467 }, 00:14:09.467 { 00:14:09.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.467 "dma_device_type": 2 00:14:09.467 } 00:14:09.467 ], 00:14:09.467 "driver_specific": {} 00:14:09.467 }' 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:09.467 06:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:09.726 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:09.726 "name": "BaseBdev3", 00:14:09.726 "aliases": [ 00:14:09.726 "9d2503dc-48bc-11ef-a06c-59ddad71024c" 00:14:09.726 ], 00:14:09.726 "product_name": "Malloc disk", 00:14:09.726 "block_size": 512, 00:14:09.726 "num_blocks": 65536, 00:14:09.726 "uuid": "9d2503dc-48bc-11ef-a06c-59ddad71024c", 00:14:09.726 "assigned_rate_limits": { 00:14:09.726 "rw_ios_per_sec": 0, 00:14:09.726 "rw_mbytes_per_sec": 0, 00:14:09.726 "r_mbytes_per_sec": 0, 00:14:09.726 "w_mbytes_per_sec": 0 
00:14:09.726 }, 00:14:09.726 "claimed": true, 00:14:09.726 "claim_type": "exclusive_write", 00:14:09.726 "zoned": false, 00:14:09.726 "supported_io_types": { 00:14:09.726 "read": true, 00:14:09.726 "write": true, 00:14:09.726 "unmap": true, 00:14:09.726 "flush": true, 00:14:09.726 "reset": true, 00:14:09.726 "nvme_admin": false, 00:14:09.726 "nvme_io": false, 00:14:09.726 "nvme_io_md": false, 00:14:09.726 "write_zeroes": true, 00:14:09.726 "zcopy": true, 00:14:09.726 "get_zone_info": false, 00:14:09.726 "zone_management": false, 00:14:09.726 "zone_append": false, 00:14:09.726 "compare": false, 00:14:09.726 "compare_and_write": false, 00:14:09.726 "abort": true, 00:14:09.726 "seek_hole": false, 00:14:09.726 "seek_data": false, 00:14:09.726 "copy": true, 00:14:09.726 "nvme_iov_md": false 00:14:09.726 }, 00:14:09.726 "memory_domains": [ 00:14:09.726 { 00:14:09.726 "dma_device_id": "system", 00:14:09.726 "dma_device_type": 1 00:14:09.726 }, 00:14:09.726 { 00:14:09.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.726 "dma_device_type": 2 00:14:09.726 } 00:14:09.726 ], 00:14:09.726 "driver_specific": {} 00:14:09.726 }' 00:14:09.726 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.726 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.726 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:09.726 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.726 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.726 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:09.726 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.726 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.727 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:09.727 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.727 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.727 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:09.727 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:10.293 [2024-07-23 06:27:22.515792] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:10.293 06:27:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:10.293 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.294 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.552 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:10.552 "name": "Existed_Raid", 00:14:10.552 "uuid": "9bc1c176-48bc-11ef-a06c-59ddad71024c", 00:14:10.552 "strip_size_kb": 0, 00:14:10.552 "state": "online", 00:14:10.552 "raid_level": "raid1", 00:14:10.552 "superblock": true, 00:14:10.552 "num_base_bdevs": 3, 00:14:10.552 "num_base_bdevs_discovered": 2, 00:14:10.552 "num_base_bdevs_operational": 2, 00:14:10.552 "base_bdevs_list": [ 00:14:10.552 { 00:14:10.552 "name": null, 00:14:10.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.552 "is_configured": false, 00:14:10.552 "data_offset": 2048, 00:14:10.552 "data_size": 63488 00:14:10.552 }, 00:14:10.552 { 00:14:10.552 "name": "BaseBdev2", 00:14:10.552 "uuid": "9c48aa50-48bc-11ef-a06c-59ddad71024c", 00:14:10.552 "is_configured": true, 00:14:10.552 "data_offset": 2048, 00:14:10.552 "data_size": 63488 00:14:10.552 }, 00:14:10.552 { 00:14:10.552 "name": "BaseBdev3", 00:14:10.552 "uuid": "9d2503dc-48bc-11ef-a06c-59ddad71024c", 00:14:10.552 "is_configured": true, 00:14:10.552 "data_offset": 2048, 00:14:10.552 "data_size": 63488 00:14:10.552 } 00:14:10.552 ] 00:14:10.552 }' 00:14:10.552 06:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:10.552 06:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.811 06:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:10.811 06:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:10.811 06:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.811 06:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:11.069 06:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:11.069 06:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:11.069 06:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:14:11.328 [2024-07-23 06:27:23.670131] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:11.328 06:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:11.328 06:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:11.328 06:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.328 06:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:11.587 06:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:11.587 06:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:11.587 06:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:11.846 [2024-07-23 06:27:24.196415] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:11.846 [2024-07-23 06:27:24.196451] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.846 [2024-07-23 06:27:24.202859] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.846 [2024-07-23 06:27:24.202890] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.846 [2024-07-23 06:27:24.202895] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x226fe6434a00 name Existed_Raid, state offline 00:14:11.846 06:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:11.846 06:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:11.846 06:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.846 06:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:12.105 06:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:12.105 06:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:12.105 06:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:14:12.105 06:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:12.105 06:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:12.105 06:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:12.364 BaseBdev2 00:14:12.364 06:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:12.364 06:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:12.364 06:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:12.364 06:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:12.364 06:27:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:12.364 06:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:12.364 06:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.623 06:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:12.881 [ 00:14:12.881 { 00:14:12.881 "name": "BaseBdev2", 00:14:12.881 "aliases": [ 00:14:12.881 "a0077233-48bc-11ef-a06c-59ddad71024c" 00:14:12.881 ], 00:14:12.881 "product_name": "Malloc disk", 00:14:12.881 "block_size": 512, 00:14:12.881 "num_blocks": 65536, 00:14:12.882 "uuid": "a0077233-48bc-11ef-a06c-59ddad71024c", 00:14:12.882 "assigned_rate_limits": { 00:14:12.882 "rw_ios_per_sec": 0, 00:14:12.882 "rw_mbytes_per_sec": 0, 00:14:12.882 "r_mbytes_per_sec": 0, 00:14:12.882 "w_mbytes_per_sec": 0 00:14:12.882 }, 00:14:12.882 "claimed": false, 00:14:12.882 "zoned": false, 00:14:12.882 "supported_io_types": { 00:14:12.882 "read": true, 00:14:12.882 "write": true, 00:14:12.882 "unmap": true, 00:14:12.882 "flush": true, 00:14:12.882 "reset": true, 00:14:12.882 "nvme_admin": false, 00:14:12.882 "nvme_io": false, 00:14:12.882 "nvme_io_md": false, 00:14:12.882 "write_zeroes": true, 00:14:12.882 "zcopy": true, 00:14:12.882 "get_zone_info": false, 00:14:12.882 "zone_management": false, 00:14:12.882 "zone_append": false, 00:14:12.882 "compare": false, 00:14:12.882 "compare_and_write": false, 00:14:12.882 "abort": true, 00:14:12.882 "seek_hole": false, 00:14:12.882 "seek_data": false, 00:14:12.882 "copy": true, 00:14:12.882 "nvme_iov_md": false 00:14:12.882 }, 00:14:12.882 "memory_domains": [ 00:14:12.882 { 00:14:12.882 "dma_device_id": "system", 00:14:12.882 "dma_device_type": 1 00:14:12.882 }, 00:14:12.882 { 00:14:12.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.882 "dma_device_type": 2 00:14:12.882 } 00:14:12.882 ], 00:14:12.882 "driver_specific": {} 00:14:12.882 } 00:14:12.882 ] 00:14:12.882 06:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:12.882 06:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:12.882 06:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:12.882 06:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:13.140 BaseBdev3 00:14:13.140 06:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:13.140 06:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:13.140 06:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:13.140 06:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:13.140 06:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:13.140 06:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:13.140 06:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:13.727 06:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:13.727 [ 00:14:13.727 { 00:14:13.727 "name": "BaseBdev3", 00:14:13.727 "aliases": [ 00:14:13.727 "a08a11fa-48bc-11ef-a06c-59ddad71024c" 00:14:13.727 ], 00:14:13.727 "product_name": "Malloc disk", 00:14:13.727 "block_size": 512, 00:14:13.727 "num_blocks": 65536, 00:14:13.727 "uuid": "a08a11fa-48bc-11ef-a06c-59ddad71024c", 00:14:13.727 "assigned_rate_limits": { 00:14:13.727 "rw_ios_per_sec": 0, 00:14:13.727 "rw_mbytes_per_sec": 0, 00:14:13.727 "r_mbytes_per_sec": 0, 00:14:13.727 "w_mbytes_per_sec": 0 00:14:13.727 }, 00:14:13.727 "claimed": false, 00:14:13.727 "zoned": false, 00:14:13.727 "supported_io_types": { 00:14:13.727 "read": true, 00:14:13.727 "write": true, 00:14:13.727 "unmap": true, 00:14:13.727 "flush": true, 00:14:13.727 "reset": true, 00:14:13.727 "nvme_admin": false, 00:14:13.727 "nvme_io": false, 00:14:13.727 "nvme_io_md": false, 00:14:13.727 "write_zeroes": true, 00:14:13.727 "zcopy": true, 00:14:13.727 "get_zone_info": false, 00:14:13.727 "zone_management": false, 00:14:13.727 "zone_append": false, 00:14:13.727 "compare": false, 00:14:13.727 "compare_and_write": false, 00:14:13.727 "abort": true, 00:14:13.727 "seek_hole": false, 00:14:13.727 "seek_data": false, 00:14:13.727 "copy": true, 00:14:13.727 "nvme_iov_md": false 00:14:13.727 }, 00:14:13.727 "memory_domains": [ 00:14:13.727 { 00:14:13.727 "dma_device_id": "system", 00:14:13.727 "dma_device_type": 1 00:14:13.727 }, 00:14:13.727 { 00:14:13.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.727 "dma_device_type": 2 00:14:13.727 } 00:14:13.727 ], 00:14:13.727 "driver_specific": {} 00:14:13.727 } 00:14:13.727 ] 00:14:13.727 06:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:13.727 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:13.727 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:13.727 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:13.985 [2024-07-23 06:27:26.494982] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:13.985 [2024-07-23 06:27:26.495034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:13.985 [2024-07-23 06:27:26.495045] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.985 [2024-07-23 06:27:26.495633] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.245 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:14.245 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:14.245 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:14.245 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:14.245 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:14:14.245 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:14.245 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.245 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.245 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.245 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.245 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.245 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.505 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.505 "name": "Existed_Raid", 00:14:14.505 "uuid": "a10f21f3-48bc-11ef-a06c-59ddad71024c", 00:14:14.505 "strip_size_kb": 0, 00:14:14.505 "state": "configuring", 00:14:14.505 "raid_level": "raid1", 00:14:14.505 "superblock": true, 00:14:14.505 "num_base_bdevs": 3, 00:14:14.505 "num_base_bdevs_discovered": 2, 00:14:14.505 "num_base_bdevs_operational": 3, 00:14:14.505 "base_bdevs_list": [ 00:14:14.505 { 00:14:14.505 "name": "BaseBdev1", 00:14:14.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.505 "is_configured": false, 00:14:14.505 "data_offset": 0, 00:14:14.505 "data_size": 0 00:14:14.505 }, 00:14:14.505 { 00:14:14.505 "name": "BaseBdev2", 00:14:14.505 "uuid": "a0077233-48bc-11ef-a06c-59ddad71024c", 00:14:14.505 "is_configured": true, 00:14:14.505 "data_offset": 2048, 00:14:14.505 "data_size": 63488 00:14:14.505 }, 00:14:14.505 { 00:14:14.505 "name": "BaseBdev3", 00:14:14.505 "uuid": "a08a11fa-48bc-11ef-a06c-59ddad71024c", 00:14:14.505 "is_configured": true, 00:14:14.505 "data_offset": 2048, 00:14:14.505 "data_size": 63488 00:14:14.505 } 00:14:14.505 ] 00:14:14.505 }' 00:14:14.505 06:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.505 06:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.764 06:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:15.022 [2024-07-23 06:27:27.326995] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:15.022 06:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:15.022 06:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.022 06:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:15.022 06:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:15.022 06:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:15.022 06:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:15.022 06:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.022 06:27:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.022 06:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.022 06:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.022 06:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.022 06:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.281 06:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.281 "name": "Existed_Raid", 00:14:15.281 "uuid": "a10f21f3-48bc-11ef-a06c-59ddad71024c", 00:14:15.281 "strip_size_kb": 0, 00:14:15.281 "state": "configuring", 00:14:15.281 "raid_level": "raid1", 00:14:15.281 "superblock": true, 00:14:15.281 "num_base_bdevs": 3, 00:14:15.281 "num_base_bdevs_discovered": 1, 00:14:15.281 "num_base_bdevs_operational": 3, 00:14:15.281 "base_bdevs_list": [ 00:14:15.281 { 00:14:15.281 "name": "BaseBdev1", 00:14:15.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.281 "is_configured": false, 00:14:15.281 "data_offset": 0, 00:14:15.281 "data_size": 0 00:14:15.281 }, 00:14:15.281 { 00:14:15.281 "name": null, 00:14:15.281 "uuid": "a0077233-48bc-11ef-a06c-59ddad71024c", 00:14:15.281 "is_configured": false, 00:14:15.281 "data_offset": 2048, 00:14:15.281 "data_size": 63488 00:14:15.281 }, 00:14:15.281 { 00:14:15.281 "name": "BaseBdev3", 00:14:15.281 "uuid": "a08a11fa-48bc-11ef-a06c-59ddad71024c", 00:14:15.281 "is_configured": true, 00:14:15.281 "data_offset": 2048, 00:14:15.281 "data_size": 63488 00:14:15.281 } 00:14:15.281 ] 00:14:15.281 }' 00:14:15.281 06:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.281 06:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.540 06:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.540 06:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:15.834 06:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:15.834 06:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:16.093 [2024-07-23 06:27:28.511162] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.093 BaseBdev1 00:14:16.093 06:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:16.093 06:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:16.093 06:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:16.093 06:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:16.093 06:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:16.093 06:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:16.093 06:27:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:16.353 06:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:16.611 [ 00:14:16.611 { 00:14:16.611 "name": "BaseBdev1", 00:14:16.611 "aliases": [ 00:14:16.611 "a242c258-48bc-11ef-a06c-59ddad71024c" 00:14:16.611 ], 00:14:16.611 "product_name": "Malloc disk", 00:14:16.611 "block_size": 512, 00:14:16.611 "num_blocks": 65536, 00:14:16.611 "uuid": "a242c258-48bc-11ef-a06c-59ddad71024c", 00:14:16.611 "assigned_rate_limits": { 00:14:16.611 "rw_ios_per_sec": 0, 00:14:16.611 "rw_mbytes_per_sec": 0, 00:14:16.611 "r_mbytes_per_sec": 0, 00:14:16.611 "w_mbytes_per_sec": 0 00:14:16.611 }, 00:14:16.611 "claimed": true, 00:14:16.611 "claim_type": "exclusive_write", 00:14:16.611 "zoned": false, 00:14:16.611 "supported_io_types": { 00:14:16.611 "read": true, 00:14:16.611 "write": true, 00:14:16.611 "unmap": true, 00:14:16.611 "flush": true, 00:14:16.611 "reset": true, 00:14:16.611 "nvme_admin": false, 00:14:16.611 "nvme_io": false, 00:14:16.611 "nvme_io_md": false, 00:14:16.611 "write_zeroes": true, 00:14:16.611 "zcopy": true, 00:14:16.611 "get_zone_info": false, 00:14:16.611 "zone_management": false, 00:14:16.611 "zone_append": false, 00:14:16.611 "compare": false, 00:14:16.611 "compare_and_write": false, 00:14:16.611 "abort": true, 00:14:16.611 "seek_hole": false, 00:14:16.611 "seek_data": false, 00:14:16.611 "copy": true, 00:14:16.611 "nvme_iov_md": false 00:14:16.611 }, 00:14:16.611 "memory_domains": [ 00:14:16.611 { 00:14:16.611 "dma_device_id": "system", 00:14:16.611 "dma_device_type": 1 00:14:16.611 }, 00:14:16.611 { 00:14:16.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.611 "dma_device_type": 2 00:14:16.611 } 00:14:16.611 ], 00:14:16.611 "driver_specific": {} 00:14:16.611 } 00:14:16.611 ] 00:14:16.611 06:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:16.611 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:16.611 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:16.611 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:16.611 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:16.611 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:16.611 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:16.611 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:16.611 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:16.611 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:16.611 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:16.611 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.611 06:27:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.872 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:16.872 "name": "Existed_Raid", 00:14:16.872 "uuid": "a10f21f3-48bc-11ef-a06c-59ddad71024c", 00:14:16.872 "strip_size_kb": 0, 00:14:16.872 "state": "configuring", 00:14:16.872 "raid_level": "raid1", 00:14:16.872 "superblock": true, 00:14:16.872 "num_base_bdevs": 3, 00:14:16.872 "num_base_bdevs_discovered": 2, 00:14:16.872 "num_base_bdevs_operational": 3, 00:14:16.872 "base_bdevs_list": [ 00:14:16.872 { 00:14:16.872 "name": "BaseBdev1", 00:14:16.872 "uuid": "a242c258-48bc-11ef-a06c-59ddad71024c", 00:14:16.872 "is_configured": true, 00:14:16.872 "data_offset": 2048, 00:14:16.872 "data_size": 63488 00:14:16.872 }, 00:14:16.872 { 00:14:16.872 "name": null, 00:14:16.872 "uuid": "a0077233-48bc-11ef-a06c-59ddad71024c", 00:14:16.872 "is_configured": false, 00:14:16.872 "data_offset": 2048, 00:14:16.872 "data_size": 63488 00:14:16.872 }, 00:14:16.872 { 00:14:16.872 "name": "BaseBdev3", 00:14:16.872 "uuid": "a08a11fa-48bc-11ef-a06c-59ddad71024c", 00:14:16.872 "is_configured": true, 00:14:16.872 "data_offset": 2048, 00:14:16.872 "data_size": 63488 00:14:16.872 } 00:14:16.872 ] 00:14:16.872 }' 00:14:16.872 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:16.872 06:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.129 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.129 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:17.387 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:17.387 06:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:17.645 [2024-07-23 06:27:30.099083] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:17.645 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:17.645 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:17.645 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:17.645 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:17.645 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:17.645 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:17.645 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:17.645 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:17.645 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:17.645 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:17.646 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.646 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.904 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:17.905 "name": "Existed_Raid", 00:14:17.905 "uuid": "a10f21f3-48bc-11ef-a06c-59ddad71024c", 00:14:17.905 "strip_size_kb": 0, 00:14:17.905 "state": "configuring", 00:14:17.905 "raid_level": "raid1", 00:14:17.905 "superblock": true, 00:14:17.905 "num_base_bdevs": 3, 00:14:17.905 "num_base_bdevs_discovered": 1, 00:14:17.905 "num_base_bdevs_operational": 3, 00:14:17.905 "base_bdevs_list": [ 00:14:17.905 { 00:14:17.905 "name": "BaseBdev1", 00:14:17.905 "uuid": "a242c258-48bc-11ef-a06c-59ddad71024c", 00:14:17.905 "is_configured": true, 00:14:17.905 "data_offset": 2048, 00:14:17.905 "data_size": 63488 00:14:17.905 }, 00:14:17.905 { 00:14:17.905 "name": null, 00:14:17.905 "uuid": "a0077233-48bc-11ef-a06c-59ddad71024c", 00:14:17.905 "is_configured": false, 00:14:17.905 "data_offset": 2048, 00:14:17.905 "data_size": 63488 00:14:17.905 }, 00:14:17.905 { 00:14:17.905 "name": null, 00:14:17.905 "uuid": "a08a11fa-48bc-11ef-a06c-59ddad71024c", 00:14:17.905 "is_configured": false, 00:14:17.905 "data_offset": 2048, 00:14:17.905 "data_size": 63488 00:14:17.905 } 00:14:17.905 ] 00:14:17.905 }' 00:14:17.905 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:17.905 06:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.471 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.471 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:18.471 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:18.471 06:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:18.729 [2024-07-23 06:27:31.239118] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:18.988 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:18.988 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:18.988 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:18.988 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:18.988 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:18.988 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:18.988 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:18.988 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:18.988 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:18.988 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
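Each verify_raid_bdev_state call in this run boils down to re-reading the raid bdev and comparing a handful of fields; a condensed sketch of that query, with the socket path and raid name taken from this log, is:

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
# pull only the raid bdev under test out of the full listing
raid_bdev_info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# the fields the test asserts on after every add/remove step
jq -r '.state'                      <<< "$raid_bdev_info"   # "configuring" or "online"
jq -r '.raid_level'                 <<< "$raid_bdev_info"   # "raid1"
jq -r '.num_base_bdevs_discovered'  <<< "$raid_bdev_info"
jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info"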
00:14:18.988 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.988 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.246 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.246 "name": "Existed_Raid", 00:14:19.246 "uuid": "a10f21f3-48bc-11ef-a06c-59ddad71024c", 00:14:19.246 "strip_size_kb": 0, 00:14:19.246 "state": "configuring", 00:14:19.246 "raid_level": "raid1", 00:14:19.246 "superblock": true, 00:14:19.246 "num_base_bdevs": 3, 00:14:19.246 "num_base_bdevs_discovered": 2, 00:14:19.246 "num_base_bdevs_operational": 3, 00:14:19.246 "base_bdevs_list": [ 00:14:19.246 { 00:14:19.246 "name": "BaseBdev1", 00:14:19.246 "uuid": "a242c258-48bc-11ef-a06c-59ddad71024c", 00:14:19.246 "is_configured": true, 00:14:19.246 "data_offset": 2048, 00:14:19.246 "data_size": 63488 00:14:19.246 }, 00:14:19.246 { 00:14:19.246 "name": null, 00:14:19.246 "uuid": "a0077233-48bc-11ef-a06c-59ddad71024c", 00:14:19.246 "is_configured": false, 00:14:19.246 "data_offset": 2048, 00:14:19.246 "data_size": 63488 00:14:19.246 }, 00:14:19.246 { 00:14:19.246 "name": "BaseBdev3", 00:14:19.246 "uuid": "a08a11fa-48bc-11ef-a06c-59ddad71024c", 00:14:19.246 "is_configured": true, 00:14:19.246 "data_offset": 2048, 00:14:19.246 "data_size": 63488 00:14:19.246 } 00:14:19.246 ] 00:14:19.246 }' 00:14:19.246 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.247 06:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.505 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:19.505 06:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.763 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:19.763 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:20.021 [2024-07-23 06:27:32.399152] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:20.021 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:20.022 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:20.022 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:20.022 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:20.022 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:20.022 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:20.022 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:20.022 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:20.022 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:14:20.022 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:20.022 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.022 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.280 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:20.280 "name": "Existed_Raid", 00:14:20.280 "uuid": "a10f21f3-48bc-11ef-a06c-59ddad71024c", 00:14:20.280 "strip_size_kb": 0, 00:14:20.280 "state": "configuring", 00:14:20.280 "raid_level": "raid1", 00:14:20.280 "superblock": true, 00:14:20.280 "num_base_bdevs": 3, 00:14:20.280 "num_base_bdevs_discovered": 1, 00:14:20.280 "num_base_bdevs_operational": 3, 00:14:20.280 "base_bdevs_list": [ 00:14:20.280 { 00:14:20.280 "name": null, 00:14:20.280 "uuid": "a242c258-48bc-11ef-a06c-59ddad71024c", 00:14:20.280 "is_configured": false, 00:14:20.280 "data_offset": 2048, 00:14:20.280 "data_size": 63488 00:14:20.280 }, 00:14:20.280 { 00:14:20.280 "name": null, 00:14:20.280 "uuid": "a0077233-48bc-11ef-a06c-59ddad71024c", 00:14:20.280 "is_configured": false, 00:14:20.280 "data_offset": 2048, 00:14:20.280 "data_size": 63488 00:14:20.280 }, 00:14:20.280 { 00:14:20.280 "name": "BaseBdev3", 00:14:20.280 "uuid": "a08a11fa-48bc-11ef-a06c-59ddad71024c", 00:14:20.280 "is_configured": true, 00:14:20.280 "data_offset": 2048, 00:14:20.280 "data_size": 63488 00:14:20.280 } 00:14:20.280 ] 00:14:20.280 }' 00:14:20.280 06:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:20.280 06:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.538 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.538 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:20.797 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:20.797 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:21.063 [2024-07-23 06:27:33.545994] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:21.063 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:21.063 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:21.063 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:21.063 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:21.063 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:21.063 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:21.063 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:21.063 06:27:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:21.063 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:21.063 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:21.063 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.063 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.629 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:21.629 "name": "Existed_Raid", 00:14:21.629 "uuid": "a10f21f3-48bc-11ef-a06c-59ddad71024c", 00:14:21.629 "strip_size_kb": 0, 00:14:21.629 "state": "configuring", 00:14:21.629 "raid_level": "raid1", 00:14:21.629 "superblock": true, 00:14:21.629 "num_base_bdevs": 3, 00:14:21.629 "num_base_bdevs_discovered": 2, 00:14:21.629 "num_base_bdevs_operational": 3, 00:14:21.629 "base_bdevs_list": [ 00:14:21.629 { 00:14:21.629 "name": null, 00:14:21.629 "uuid": "a242c258-48bc-11ef-a06c-59ddad71024c", 00:14:21.629 "is_configured": false, 00:14:21.629 "data_offset": 2048, 00:14:21.629 "data_size": 63488 00:14:21.629 }, 00:14:21.629 { 00:14:21.629 "name": "BaseBdev2", 00:14:21.629 "uuid": "a0077233-48bc-11ef-a06c-59ddad71024c", 00:14:21.629 "is_configured": true, 00:14:21.629 "data_offset": 2048, 00:14:21.629 "data_size": 63488 00:14:21.629 }, 00:14:21.629 { 00:14:21.629 "name": "BaseBdev3", 00:14:21.629 "uuid": "a08a11fa-48bc-11ef-a06c-59ddad71024c", 00:14:21.629 "is_configured": true, 00:14:21.629 "data_offset": 2048, 00:14:21.629 "data_size": 63488 00:14:21.629 } 00:14:21.629 ] 00:14:21.629 }' 00:14:21.629 06:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:21.629 06:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.887 06:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:21.887 06:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.145 06:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:22.145 06:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.145 06:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:22.403 06:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a242c258-48bc-11ef-a06c-59ddad71024c 00:14:22.661 [2024-07-23 06:27:34.966352] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:22.661 [2024-07-23 06:27:34.966455] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x226fe6434f00 00:14:22.661 [2024-07-23 06:27:34.966461] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:22.661 [2024-07-23 06:27:34.966481] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x226fe6497e20 00:14:22.661 [2024-07-23 06:27:34.966541] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x226fe6434f00 00:14:22.661 [2024-07-23 06:27:34.966545] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x226fe6434f00 00:14:22.661 [2024-07-23 06:27:34.966565] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.661 NewBaseBdev 00:14:22.661 06:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:22.661 06:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:14:22.661 06:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:22.661 06:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:22.661 06:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:22.661 06:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:22.661 06:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:22.922 06:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:23.201 [ 00:14:23.201 { 00:14:23.201 "name": "NewBaseBdev", 00:14:23.201 "aliases": [ 00:14:23.201 "a242c258-48bc-11ef-a06c-59ddad71024c" 00:14:23.201 ], 00:14:23.201 "product_name": "Malloc disk", 00:14:23.201 "block_size": 512, 00:14:23.201 "num_blocks": 65536, 00:14:23.201 "uuid": "a242c258-48bc-11ef-a06c-59ddad71024c", 00:14:23.201 "assigned_rate_limits": { 00:14:23.201 "rw_ios_per_sec": 0, 00:14:23.201 "rw_mbytes_per_sec": 0, 00:14:23.201 "r_mbytes_per_sec": 0, 00:14:23.201 "w_mbytes_per_sec": 0 00:14:23.201 }, 00:14:23.201 "claimed": true, 00:14:23.201 "claim_type": "exclusive_write", 00:14:23.201 "zoned": false, 00:14:23.201 "supported_io_types": { 00:14:23.201 "read": true, 00:14:23.201 "write": true, 00:14:23.201 "unmap": true, 00:14:23.201 "flush": true, 00:14:23.201 "reset": true, 00:14:23.201 "nvme_admin": false, 00:14:23.201 "nvme_io": false, 00:14:23.201 "nvme_io_md": false, 00:14:23.201 "write_zeroes": true, 00:14:23.201 "zcopy": true, 00:14:23.201 "get_zone_info": false, 00:14:23.201 "zone_management": false, 00:14:23.201 "zone_append": false, 00:14:23.201 "compare": false, 00:14:23.201 "compare_and_write": false, 00:14:23.201 "abort": true, 00:14:23.201 "seek_hole": false, 00:14:23.201 "seek_data": false, 00:14:23.201 "copy": true, 00:14:23.201 "nvme_iov_md": false 00:14:23.201 }, 00:14:23.201 "memory_domains": [ 00:14:23.201 { 00:14:23.201 "dma_device_id": "system", 00:14:23.201 "dma_device_type": 1 00:14:23.201 }, 00:14:23.201 { 00:14:23.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.201 "dma_device_type": 2 00:14:23.201 } 00:14:23.201 ], 00:14:23.201 "driver_specific": {} 00:14:23.201 } 00:14:23.201 ] 00:14:23.201 06:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:23.201 06:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:23.201 06:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:23.201 06:27:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:23.201 06:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:23.201 06:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:23.201 06:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:23.201 06:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:23.201 06:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:23.201 06:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:23.201 06:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:23.201 06:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.201 06:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.460 06:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:23.460 "name": "Existed_Raid", 00:14:23.460 "uuid": "a10f21f3-48bc-11ef-a06c-59ddad71024c", 00:14:23.460 "strip_size_kb": 0, 00:14:23.460 "state": "online", 00:14:23.460 "raid_level": "raid1", 00:14:23.460 "superblock": true, 00:14:23.460 "num_base_bdevs": 3, 00:14:23.460 "num_base_bdevs_discovered": 3, 00:14:23.460 "num_base_bdevs_operational": 3, 00:14:23.460 "base_bdevs_list": [ 00:14:23.460 { 00:14:23.460 "name": "NewBaseBdev", 00:14:23.460 "uuid": "a242c258-48bc-11ef-a06c-59ddad71024c", 00:14:23.460 "is_configured": true, 00:14:23.460 "data_offset": 2048, 00:14:23.460 "data_size": 63488 00:14:23.460 }, 00:14:23.460 { 00:14:23.460 "name": "BaseBdev2", 00:14:23.460 "uuid": "a0077233-48bc-11ef-a06c-59ddad71024c", 00:14:23.460 "is_configured": true, 00:14:23.460 "data_offset": 2048, 00:14:23.460 "data_size": 63488 00:14:23.460 }, 00:14:23.460 { 00:14:23.460 "name": "BaseBdev3", 00:14:23.460 "uuid": "a08a11fa-48bc-11ef-a06c-59ddad71024c", 00:14:23.460 "is_configured": true, 00:14:23.460 "data_offset": 2048, 00:14:23.460 "data_size": 63488 00:14:23.460 } 00:14:23.460 ] 00:14:23.460 }' 00:14:23.460 06:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:23.460 06:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.719 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:23.719 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:23.719 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:23.719 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:23.719 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:23.719 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:23.719 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:23.719 06:27:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:23.978 [2024-07-23 06:27:36.418403] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.978 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:23.978 "name": "Existed_Raid", 00:14:23.978 "aliases": [ 00:14:23.978 "a10f21f3-48bc-11ef-a06c-59ddad71024c" 00:14:23.978 ], 00:14:23.978 "product_name": "Raid Volume", 00:14:23.978 "block_size": 512, 00:14:23.978 "num_blocks": 63488, 00:14:23.978 "uuid": "a10f21f3-48bc-11ef-a06c-59ddad71024c", 00:14:23.978 "assigned_rate_limits": { 00:14:23.978 "rw_ios_per_sec": 0, 00:14:23.978 "rw_mbytes_per_sec": 0, 00:14:23.978 "r_mbytes_per_sec": 0, 00:14:23.978 "w_mbytes_per_sec": 0 00:14:23.978 }, 00:14:23.978 "claimed": false, 00:14:23.978 "zoned": false, 00:14:23.978 "supported_io_types": { 00:14:23.978 "read": true, 00:14:23.978 "write": true, 00:14:23.978 "unmap": false, 00:14:23.978 "flush": false, 00:14:23.978 "reset": true, 00:14:23.978 "nvme_admin": false, 00:14:23.978 "nvme_io": false, 00:14:23.978 "nvme_io_md": false, 00:14:23.978 "write_zeroes": true, 00:14:23.978 "zcopy": false, 00:14:23.978 "get_zone_info": false, 00:14:23.978 "zone_management": false, 00:14:23.978 "zone_append": false, 00:14:23.978 "compare": false, 00:14:23.978 "compare_and_write": false, 00:14:23.978 "abort": false, 00:14:23.978 "seek_hole": false, 00:14:23.978 "seek_data": false, 00:14:23.978 "copy": false, 00:14:23.978 "nvme_iov_md": false 00:14:23.978 }, 00:14:23.978 "memory_domains": [ 00:14:23.978 { 00:14:23.978 "dma_device_id": "system", 00:14:23.978 "dma_device_type": 1 00:14:23.978 }, 00:14:23.978 { 00:14:23.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.978 "dma_device_type": 2 00:14:23.978 }, 00:14:23.978 { 00:14:23.978 "dma_device_id": "system", 00:14:23.978 "dma_device_type": 1 00:14:23.978 }, 00:14:23.978 { 00:14:23.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.978 "dma_device_type": 2 00:14:23.978 }, 00:14:23.978 { 00:14:23.978 "dma_device_id": "system", 00:14:23.978 "dma_device_type": 1 00:14:23.978 }, 00:14:23.978 { 00:14:23.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.978 "dma_device_type": 2 00:14:23.978 } 00:14:23.978 ], 00:14:23.978 "driver_specific": { 00:14:23.978 "raid": { 00:14:23.978 "uuid": "a10f21f3-48bc-11ef-a06c-59ddad71024c", 00:14:23.978 "strip_size_kb": 0, 00:14:23.978 "state": "online", 00:14:23.978 "raid_level": "raid1", 00:14:23.978 "superblock": true, 00:14:23.978 "num_base_bdevs": 3, 00:14:23.978 "num_base_bdevs_discovered": 3, 00:14:23.978 "num_base_bdevs_operational": 3, 00:14:23.978 "base_bdevs_list": [ 00:14:23.978 { 00:14:23.978 "name": "NewBaseBdev", 00:14:23.978 "uuid": "a242c258-48bc-11ef-a06c-59ddad71024c", 00:14:23.978 "is_configured": true, 00:14:23.978 "data_offset": 2048, 00:14:23.978 "data_size": 63488 00:14:23.978 }, 00:14:23.978 { 00:14:23.978 "name": "BaseBdev2", 00:14:23.978 "uuid": "a0077233-48bc-11ef-a06c-59ddad71024c", 00:14:23.978 "is_configured": true, 00:14:23.978 "data_offset": 2048, 00:14:23.978 "data_size": 63488 00:14:23.978 }, 00:14:23.978 { 00:14:23.978 "name": "BaseBdev3", 00:14:23.978 "uuid": "a08a11fa-48bc-11ef-a06c-59ddad71024c", 00:14:23.978 "is_configured": true, 00:14:23.978 "data_offset": 2048, 00:14:23.978 "data_size": 63488 00:14:23.978 } 00:14:23.978 ] 00:14:23.978 } 00:14:23.978 } 00:14:23.978 }' 00:14:23.978 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.978 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:23.978 BaseBdev2 00:14:23.978 BaseBdev3' 00:14:23.978 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.978 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:23.978 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:24.237 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.237 "name": "NewBaseBdev", 00:14:24.237 "aliases": [ 00:14:24.237 "a242c258-48bc-11ef-a06c-59ddad71024c" 00:14:24.237 ], 00:14:24.237 "product_name": "Malloc disk", 00:14:24.237 "block_size": 512, 00:14:24.237 "num_blocks": 65536, 00:14:24.237 "uuid": "a242c258-48bc-11ef-a06c-59ddad71024c", 00:14:24.237 "assigned_rate_limits": { 00:14:24.237 "rw_ios_per_sec": 0, 00:14:24.237 "rw_mbytes_per_sec": 0, 00:14:24.237 "r_mbytes_per_sec": 0, 00:14:24.237 "w_mbytes_per_sec": 0 00:14:24.237 }, 00:14:24.237 "claimed": true, 00:14:24.237 "claim_type": "exclusive_write", 00:14:24.237 "zoned": false, 00:14:24.237 "supported_io_types": { 00:14:24.237 "read": true, 00:14:24.237 "write": true, 00:14:24.237 "unmap": true, 00:14:24.237 "flush": true, 00:14:24.237 "reset": true, 00:14:24.237 "nvme_admin": false, 00:14:24.237 "nvme_io": false, 00:14:24.237 "nvme_io_md": false, 00:14:24.237 "write_zeroes": true, 00:14:24.237 "zcopy": true, 00:14:24.237 "get_zone_info": false, 00:14:24.237 "zone_management": false, 00:14:24.237 "zone_append": false, 00:14:24.237 "compare": false, 00:14:24.237 "compare_and_write": false, 00:14:24.237 "abort": true, 00:14:24.237 "seek_hole": false, 00:14:24.237 "seek_data": false, 00:14:24.237 "copy": true, 00:14:24.237 "nvme_iov_md": false 00:14:24.237 }, 00:14:24.237 "memory_domains": [ 00:14:24.237 { 00:14:24.237 "dma_device_id": "system", 00:14:24.237 "dma_device_type": 1 00:14:24.237 }, 00:14:24.237 { 00:14:24.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.238 "dma_device_type": 2 00:14:24.238 } 00:14:24.238 ], 00:14:24.238 "driver_specific": {} 00:14:24.238 }' 00:14:24.238 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.238 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.238 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.238 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.238 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.238 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.238 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.238 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.238 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.238 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.238 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:14:24.238 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.238 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:24.496 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:24.496 06:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:24.755 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.755 "name": "BaseBdev2", 00:14:24.755 "aliases": [ 00:14:24.755 "a0077233-48bc-11ef-a06c-59ddad71024c" 00:14:24.755 ], 00:14:24.755 "product_name": "Malloc disk", 00:14:24.755 "block_size": 512, 00:14:24.755 "num_blocks": 65536, 00:14:24.755 "uuid": "a0077233-48bc-11ef-a06c-59ddad71024c", 00:14:24.755 "assigned_rate_limits": { 00:14:24.755 "rw_ios_per_sec": 0, 00:14:24.755 "rw_mbytes_per_sec": 0, 00:14:24.755 "r_mbytes_per_sec": 0, 00:14:24.755 "w_mbytes_per_sec": 0 00:14:24.755 }, 00:14:24.755 "claimed": true, 00:14:24.755 "claim_type": "exclusive_write", 00:14:24.755 "zoned": false, 00:14:24.755 "supported_io_types": { 00:14:24.755 "read": true, 00:14:24.755 "write": true, 00:14:24.755 "unmap": true, 00:14:24.755 "flush": true, 00:14:24.755 "reset": true, 00:14:24.755 "nvme_admin": false, 00:14:24.755 "nvme_io": false, 00:14:24.755 "nvme_io_md": false, 00:14:24.755 "write_zeroes": true, 00:14:24.755 "zcopy": true, 00:14:24.755 "get_zone_info": false, 00:14:24.755 "zone_management": false, 00:14:24.755 "zone_append": false, 00:14:24.755 "compare": false, 00:14:24.755 "compare_and_write": false, 00:14:24.755 "abort": true, 00:14:24.755 "seek_hole": false, 00:14:24.755 "seek_data": false, 00:14:24.755 "copy": true, 00:14:24.755 "nvme_iov_md": false 00:14:24.755 }, 00:14:24.755 "memory_domains": [ 00:14:24.755 { 00:14:24.755 "dma_device_id": "system", 00:14:24.755 "dma_device_type": 1 00:14:24.755 }, 00:14:24.755 { 00:14:24.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.755 "dma_device_type": 2 00:14:24.755 } 00:14:24.755 ], 00:14:24.755 "driver_specific": {} 00:14:24.755 }' 00:14:24.755 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.756 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.756 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.756 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.756 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.756 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.756 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.756 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.756 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.756 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.756 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.756 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.756 06:27:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:24.756 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:24.756 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:25.014 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:25.014 "name": "BaseBdev3", 00:14:25.014 "aliases": [ 00:14:25.014 "a08a11fa-48bc-11ef-a06c-59ddad71024c" 00:14:25.014 ], 00:14:25.014 "product_name": "Malloc disk", 00:14:25.014 "block_size": 512, 00:14:25.014 "num_blocks": 65536, 00:14:25.014 "uuid": "a08a11fa-48bc-11ef-a06c-59ddad71024c", 00:14:25.014 "assigned_rate_limits": { 00:14:25.014 "rw_ios_per_sec": 0, 00:14:25.014 "rw_mbytes_per_sec": 0, 00:14:25.014 "r_mbytes_per_sec": 0, 00:14:25.014 "w_mbytes_per_sec": 0 00:14:25.014 }, 00:14:25.014 "claimed": true, 00:14:25.014 "claim_type": "exclusive_write", 00:14:25.014 "zoned": false, 00:14:25.014 "supported_io_types": { 00:14:25.014 "read": true, 00:14:25.014 "write": true, 00:14:25.014 "unmap": true, 00:14:25.014 "flush": true, 00:14:25.014 "reset": true, 00:14:25.014 "nvme_admin": false, 00:14:25.014 "nvme_io": false, 00:14:25.014 "nvme_io_md": false, 00:14:25.014 "write_zeroes": true, 00:14:25.014 "zcopy": true, 00:14:25.014 "get_zone_info": false, 00:14:25.014 "zone_management": false, 00:14:25.014 "zone_append": false, 00:14:25.014 "compare": false, 00:14:25.014 "compare_and_write": false, 00:14:25.015 "abort": true, 00:14:25.015 "seek_hole": false, 00:14:25.015 "seek_data": false, 00:14:25.015 "copy": true, 00:14:25.015 "nvme_iov_md": false 00:14:25.015 }, 00:14:25.015 "memory_domains": [ 00:14:25.015 { 00:14:25.015 "dma_device_id": "system", 00:14:25.015 "dma_device_type": 1 00:14:25.015 }, 00:14:25.015 { 00:14:25.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.015 "dma_device_type": 2 00:14:25.015 } 00:14:25.015 ], 00:14:25.015 "driver_specific": {} 00:14:25.015 }' 00:14:25.015 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:25.015 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:25.015 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:25.015 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:25.015 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:25.015 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:25.015 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:25.015 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:25.015 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:25.015 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:25.015 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:25.015 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:25.015 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete Existed_Raid 00:14:25.274 [2024-07-23 06:27:37.694537] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.274 [2024-07-23 06:27:37.694561] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.274 [2024-07-23 06:27:37.694583] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.274 [2024-07-23 06:27:37.694647] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.274 [2024-07-23 06:27:37.694652] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x226fe6434f00 name Existed_Raid, state offline 00:14:25.274 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 56877 00:14:25.274 06:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 56877 ']' 00:14:25.274 06:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 56877 00:14:25.274 06:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:14:25.274 06:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:25.274 06:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 56877 00:14:25.274 06:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:14:25.274 06:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:14:25.274 06:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:14:25.274 killing process with pid 56877 00:14:25.274 06:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56877' 00:14:25.274 06:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 56877 00:14:25.274 [2024-07-23 06:27:37.722544] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.274 06:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 56877 00:14:25.274 [2024-07-23 06:27:37.741384] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.532 06:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:25.532 00:14:25.532 real 0m24.660s 00:14:25.532 user 0m44.753s 00:14:25.532 sys 0m3.749s 00:14:25.532 06:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:25.532 06:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.532 ************************************ 00:14:25.532 END TEST raid_state_function_test_sb 00:14:25.532 ************************************ 00:14:25.532 06:27:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:25.532 06:27:37 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:14:25.532 06:27:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:25.532 06:27:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.532 06:27:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:25.532 ************************************ 00:14:25.532 START TEST raid_superblock_test 00:14:25.532 ************************************ 00:14:25.532 06:27:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:14:25.532 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:14:25.532 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:14:25.532 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:25.532 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:25.532 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:25.532 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:25.532 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:25.532 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=57609 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 57609 /var/tmp/spdk-raid.sock 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 57609 ']' 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:25.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:25.533 06:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.533 [2024-07-23 06:27:37.990087] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:14:25.533 [2024-07-23 06:27:37.990336] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:26.159 EAL: TSC is not safe to use in SMP mode 00:14:26.159 EAL: TSC is not invariant 00:14:26.159 [2024-07-23 06:27:38.561452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.159 [2024-07-23 06:27:38.651169] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:26.159 [2024-07-23 06:27:38.653414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.159 [2024-07-23 06:27:38.654230] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.159 [2024-07-23 06:27:38.654242] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.726 06:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:26.726 06:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:14:26.726 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:26.726 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:26.726 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:26.726 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:26.726 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:26.726 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.726 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.726 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.726 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:26.984 malloc1 00:14:26.984 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:27.242 [2024-07-23 06:27:39.623837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:27.242 [2024-07-23 06:27:39.623900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.242 [2024-07-23 06:27:39.623936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x288f9c234780 00:14:27.242 [2024-07-23 06:27:39.623959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.242 [2024-07-23 06:27:39.624971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.242 [2024-07-23 06:27:39.625009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:27.242 pt1 00:14:27.242 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:27.242 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:27.242 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:27.242 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local 
bdev_pt=pt2 00:14:27.242 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:27.242 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.242 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.243 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.243 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:27.501 malloc2 00:14:27.501 06:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:27.759 [2024-07-23 06:27:40.175892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:27.759 [2024-07-23 06:27:40.175944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.759 [2024-07-23 06:27:40.175957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x288f9c234c80 00:14:27.759 [2024-07-23 06:27:40.175965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.759 [2024-07-23 06:27:40.176604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.759 [2024-07-23 06:27:40.176630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:27.759 pt2 00:14:27.759 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:27.759 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:27.759 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:14:27.759 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:14:27.759 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:27.759 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.759 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.759 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.759 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:28.016 malloc3 00:14:28.016 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:28.274 [2024-07-23 06:27:40.703907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:28.274 [2024-07-23 06:27:40.703984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.274 [2024-07-23 06:27:40.704028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x288f9c235180 00:14:28.274 [2024-07-23 06:27:40.704036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.274 [2024-07-23 06:27:40.704716] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.274 [2024-07-23 06:27:40.704740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:28.274 pt3 00:14:28.274 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:28.274 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:28.274 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:14:28.532 [2024-07-23 06:27:40.923940] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:28.532 [2024-07-23 06:27:40.924708] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.532 [2024-07-23 06:27:40.924734] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:28.532 [2024-07-23 06:27:40.924790] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x288f9c235400 00:14:28.532 [2024-07-23 06:27:40.924799] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:28.532 [2024-07-23 06:27:40.924832] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x288f9c297e20 00:14:28.532 [2024-07-23 06:27:40.924929] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x288f9c235400 00:14:28.532 [2024-07-23 06:27:40.924936] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x288f9c235400 00:14:28.532 [2024-07-23 06:27:40.924976] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.532 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:28.532 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:28.532 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:28.532 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:28.532 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:28.532 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:28.532 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:28.532 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.532 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.532 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.532 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.532 06:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.791 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:28.791 "name": "raid_bdev1", 00:14:28.791 "uuid": "a9a8d123-48bc-11ef-a06c-59ddad71024c", 00:14:28.791 "strip_size_kb": 0, 00:14:28.791 "state": "online", 00:14:28.791 "raid_level": "raid1", 00:14:28.791 "superblock": true, 00:14:28.791 "num_base_bdevs": 3, 00:14:28.791 
"num_base_bdevs_discovered": 3, 00:14:28.791 "num_base_bdevs_operational": 3, 00:14:28.791 "base_bdevs_list": [ 00:14:28.791 { 00:14:28.791 "name": "pt1", 00:14:28.791 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.791 "is_configured": true, 00:14:28.791 "data_offset": 2048, 00:14:28.791 "data_size": 63488 00:14:28.791 }, 00:14:28.791 { 00:14:28.791 "name": "pt2", 00:14:28.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.791 "is_configured": true, 00:14:28.791 "data_offset": 2048, 00:14:28.791 "data_size": 63488 00:14:28.791 }, 00:14:28.791 { 00:14:28.791 "name": "pt3", 00:14:28.791 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.791 "is_configured": true, 00:14:28.791 "data_offset": 2048, 00:14:28.791 "data_size": 63488 00:14:28.791 } 00:14:28.791 ] 00:14:28.791 }' 00:14:28.791 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:28.791 06:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.050 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:29.050 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:29.050 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:29.050 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:29.050 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:29.050 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:29.050 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:29.050 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:29.309 [2024-07-23 06:27:41.732080] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.309 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:29.309 "name": "raid_bdev1", 00:14:29.309 "aliases": [ 00:14:29.310 "a9a8d123-48bc-11ef-a06c-59ddad71024c" 00:14:29.310 ], 00:14:29.310 "product_name": "Raid Volume", 00:14:29.310 "block_size": 512, 00:14:29.310 "num_blocks": 63488, 00:14:29.310 "uuid": "a9a8d123-48bc-11ef-a06c-59ddad71024c", 00:14:29.310 "assigned_rate_limits": { 00:14:29.310 "rw_ios_per_sec": 0, 00:14:29.310 "rw_mbytes_per_sec": 0, 00:14:29.310 "r_mbytes_per_sec": 0, 00:14:29.310 "w_mbytes_per_sec": 0 00:14:29.310 }, 00:14:29.310 "claimed": false, 00:14:29.310 "zoned": false, 00:14:29.310 "supported_io_types": { 00:14:29.310 "read": true, 00:14:29.310 "write": true, 00:14:29.310 "unmap": false, 00:14:29.310 "flush": false, 00:14:29.310 "reset": true, 00:14:29.310 "nvme_admin": false, 00:14:29.310 "nvme_io": false, 00:14:29.310 "nvme_io_md": false, 00:14:29.310 "write_zeroes": true, 00:14:29.310 "zcopy": false, 00:14:29.310 "get_zone_info": false, 00:14:29.310 "zone_management": false, 00:14:29.310 "zone_append": false, 00:14:29.310 "compare": false, 00:14:29.310 "compare_and_write": false, 00:14:29.310 "abort": false, 00:14:29.310 "seek_hole": false, 00:14:29.310 "seek_data": false, 00:14:29.310 "copy": false, 00:14:29.310 "nvme_iov_md": false 00:14:29.310 }, 00:14:29.310 "memory_domains": [ 00:14:29.310 { 00:14:29.310 "dma_device_id": "system", 00:14:29.310 "dma_device_type": 1 00:14:29.310 }, 00:14:29.310 { 
00:14:29.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.310 "dma_device_type": 2 00:14:29.310 }, 00:14:29.310 { 00:14:29.310 "dma_device_id": "system", 00:14:29.310 "dma_device_type": 1 00:14:29.310 }, 00:14:29.310 { 00:14:29.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.310 "dma_device_type": 2 00:14:29.310 }, 00:14:29.310 { 00:14:29.310 "dma_device_id": "system", 00:14:29.310 "dma_device_type": 1 00:14:29.310 }, 00:14:29.310 { 00:14:29.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.310 "dma_device_type": 2 00:14:29.310 } 00:14:29.310 ], 00:14:29.310 "driver_specific": { 00:14:29.310 "raid": { 00:14:29.310 "uuid": "a9a8d123-48bc-11ef-a06c-59ddad71024c", 00:14:29.310 "strip_size_kb": 0, 00:14:29.310 "state": "online", 00:14:29.310 "raid_level": "raid1", 00:14:29.310 "superblock": true, 00:14:29.310 "num_base_bdevs": 3, 00:14:29.310 "num_base_bdevs_discovered": 3, 00:14:29.310 "num_base_bdevs_operational": 3, 00:14:29.310 "base_bdevs_list": [ 00:14:29.310 { 00:14:29.310 "name": "pt1", 00:14:29.310 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.310 "is_configured": true, 00:14:29.310 "data_offset": 2048, 00:14:29.310 "data_size": 63488 00:14:29.310 }, 00:14:29.310 { 00:14:29.310 "name": "pt2", 00:14:29.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.310 "is_configured": true, 00:14:29.310 "data_offset": 2048, 00:14:29.310 "data_size": 63488 00:14:29.310 }, 00:14:29.310 { 00:14:29.310 "name": "pt3", 00:14:29.310 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.310 "is_configured": true, 00:14:29.310 "data_offset": 2048, 00:14:29.310 "data_size": 63488 00:14:29.310 } 00:14:29.310 ] 00:14:29.310 } 00:14:29.310 } 00:14:29.310 }' 00:14:29.310 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.310 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:29.310 pt2 00:14:29.310 pt3' 00:14:29.310 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:29.310 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:29.310 06:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:29.569 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:29.569 "name": "pt1", 00:14:29.569 "aliases": [ 00:14:29.569 "00000000-0000-0000-0000-000000000001" 00:14:29.569 ], 00:14:29.569 "product_name": "passthru", 00:14:29.569 "block_size": 512, 00:14:29.569 "num_blocks": 65536, 00:14:29.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.569 "assigned_rate_limits": { 00:14:29.569 "rw_ios_per_sec": 0, 00:14:29.569 "rw_mbytes_per_sec": 0, 00:14:29.569 "r_mbytes_per_sec": 0, 00:14:29.569 "w_mbytes_per_sec": 0 00:14:29.569 }, 00:14:29.569 "claimed": true, 00:14:29.569 "claim_type": "exclusive_write", 00:14:29.569 "zoned": false, 00:14:29.569 "supported_io_types": { 00:14:29.569 "read": true, 00:14:29.569 "write": true, 00:14:29.569 "unmap": true, 00:14:29.569 "flush": true, 00:14:29.569 "reset": true, 00:14:29.569 "nvme_admin": false, 00:14:29.569 "nvme_io": false, 00:14:29.569 "nvme_io_md": false, 00:14:29.569 "write_zeroes": true, 00:14:29.569 "zcopy": true, 00:14:29.569 "get_zone_info": false, 00:14:29.569 "zone_management": false, 00:14:29.569 "zone_append": false, 00:14:29.569 
"compare": false, 00:14:29.569 "compare_and_write": false, 00:14:29.569 "abort": true, 00:14:29.569 "seek_hole": false, 00:14:29.569 "seek_data": false, 00:14:29.569 "copy": true, 00:14:29.569 "nvme_iov_md": false 00:14:29.569 }, 00:14:29.569 "memory_domains": [ 00:14:29.569 { 00:14:29.569 "dma_device_id": "system", 00:14:29.569 "dma_device_type": 1 00:14:29.569 }, 00:14:29.569 { 00:14:29.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.569 "dma_device_type": 2 00:14:29.569 } 00:14:29.569 ], 00:14:29.569 "driver_specific": { 00:14:29.569 "passthru": { 00:14:29.569 "name": "pt1", 00:14:29.569 "base_bdev_name": "malloc1" 00:14:29.569 } 00:14:29.569 } 00:14:29.569 }' 00:14:29.569 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.569 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.569 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:29.569 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.569 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.569 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:29.569 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:29.569 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:29.569 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:29.828 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:29.828 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:29.828 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:29.828 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:29.828 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:29.828 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:29.828 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:29.828 "name": "pt2", 00:14:29.828 "aliases": [ 00:14:29.828 "00000000-0000-0000-0000-000000000002" 00:14:29.828 ], 00:14:29.828 "product_name": "passthru", 00:14:29.828 "block_size": 512, 00:14:29.828 "num_blocks": 65536, 00:14:29.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.828 "assigned_rate_limits": { 00:14:29.828 "rw_ios_per_sec": 0, 00:14:29.828 "rw_mbytes_per_sec": 0, 00:14:29.828 "r_mbytes_per_sec": 0, 00:14:29.828 "w_mbytes_per_sec": 0 00:14:29.828 }, 00:14:29.828 "claimed": true, 00:14:29.828 "claim_type": "exclusive_write", 00:14:29.828 "zoned": false, 00:14:29.828 "supported_io_types": { 00:14:29.828 "read": true, 00:14:29.828 "write": true, 00:14:29.828 "unmap": true, 00:14:29.828 "flush": true, 00:14:29.828 "reset": true, 00:14:29.828 "nvme_admin": false, 00:14:29.828 "nvme_io": false, 00:14:29.828 "nvme_io_md": false, 00:14:29.828 "write_zeroes": true, 00:14:29.828 "zcopy": true, 00:14:29.828 "get_zone_info": false, 00:14:29.828 "zone_management": false, 00:14:29.828 "zone_append": false, 00:14:29.828 "compare": false, 00:14:29.828 "compare_and_write": false, 00:14:29.828 "abort": true, 00:14:29.829 "seek_hole": false, 00:14:29.829 "seek_data": false, 
00:14:29.829 "copy": true, 00:14:29.829 "nvme_iov_md": false 00:14:29.829 }, 00:14:29.829 "memory_domains": [ 00:14:29.829 { 00:14:29.829 "dma_device_id": "system", 00:14:29.829 "dma_device_type": 1 00:14:29.829 }, 00:14:29.829 { 00:14:29.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.829 "dma_device_type": 2 00:14:29.829 } 00:14:29.829 ], 00:14:29.829 "driver_specific": { 00:14:29.829 "passthru": { 00:14:29.829 "name": "pt2", 00:14:29.829 "base_bdev_name": "malloc2" 00:14:29.829 } 00:14:29.829 } 00:14:29.829 }' 00:14:29.829 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.829 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.089 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:30.089 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.089 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.089 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:30.089 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.089 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.089 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:30.089 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.089 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.089 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.089 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:30.089 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:30.089 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:30.352 "name": "pt3", 00:14:30.352 "aliases": [ 00:14:30.352 "00000000-0000-0000-0000-000000000003" 00:14:30.352 ], 00:14:30.352 "product_name": "passthru", 00:14:30.352 "block_size": 512, 00:14:30.352 "num_blocks": 65536, 00:14:30.352 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.352 "assigned_rate_limits": { 00:14:30.352 "rw_ios_per_sec": 0, 00:14:30.352 "rw_mbytes_per_sec": 0, 00:14:30.352 "r_mbytes_per_sec": 0, 00:14:30.352 "w_mbytes_per_sec": 0 00:14:30.352 }, 00:14:30.352 "claimed": true, 00:14:30.352 "claim_type": "exclusive_write", 00:14:30.352 "zoned": false, 00:14:30.352 "supported_io_types": { 00:14:30.352 "read": true, 00:14:30.352 "write": true, 00:14:30.352 "unmap": true, 00:14:30.352 "flush": true, 00:14:30.352 "reset": true, 00:14:30.352 "nvme_admin": false, 00:14:30.352 "nvme_io": false, 00:14:30.352 "nvme_io_md": false, 00:14:30.352 "write_zeroes": true, 00:14:30.352 "zcopy": true, 00:14:30.352 "get_zone_info": false, 00:14:30.352 "zone_management": false, 00:14:30.352 "zone_append": false, 00:14:30.352 "compare": false, 00:14:30.352 "compare_and_write": false, 00:14:30.352 "abort": true, 00:14:30.352 "seek_hole": false, 00:14:30.352 "seek_data": false, 00:14:30.352 "copy": true, 00:14:30.352 "nvme_iov_md": false 00:14:30.352 }, 00:14:30.352 "memory_domains": [ 00:14:30.352 { 00:14:30.352 "dma_device_id": 
"system", 00:14:30.352 "dma_device_type": 1 00:14:30.352 }, 00:14:30.352 { 00:14:30.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.352 "dma_device_type": 2 00:14:30.352 } 00:14:30.352 ], 00:14:30.352 "driver_specific": { 00:14:30.352 "passthru": { 00:14:30.352 "name": "pt3", 00:14:30.352 "base_bdev_name": "malloc3" 00:14:30.352 } 00:14:30.352 } 00:14:30.352 }' 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:30.352 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:30.617 [2024-07-23 06:27:42.940207] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.617 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=a9a8d123-48bc-11ef-a06c-59ddad71024c 00:14:30.617 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z a9a8d123-48bc-11ef-a06c-59ddad71024c ']' 00:14:30.617 06:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:30.886 [2024-07-23 06:27:43.200182] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:30.886 [2024-07-23 06:27:43.200203] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.886 [2024-07-23 06:27:43.200223] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.886 [2024-07-23 06:27:43.200239] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.886 [2024-07-23 06:27:43.200250] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x288f9c235400 name raid_bdev1, state offline 00:14:30.886 06:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.886 06:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:31.157 06:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:31.157 06:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 
00:14:31.157 06:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:31.157 06:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:31.157 06:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:31.157 06:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:31.430 06:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:31.430 06:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:32.017 06:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:32.017 06:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:32.017 06:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:32.017 06:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:32.017 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:32.017 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:32.018 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:32.018 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.018 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:32.018 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.018 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:32.018 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.018 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:32.018 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:32.018 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:32.275 [2024-07-23 06:27:44.740386] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:32.275 [2024-07-23 06:27:44.741045] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:32.275 [2024-07-23 06:27:44.741065] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:32.275 
[2024-07-23 06:27:44.741079] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:32.276 [2024-07-23 06:27:44.741116] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:32.276 [2024-07-23 06:27:44.741129] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:32.276 [2024-07-23 06:27:44.741137] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.276 [2024-07-23 06:27:44.741142] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x288f9c235180 name raid_bdev1, state configuring 00:14:32.276 request: 00:14:32.276 { 00:14:32.276 "name": "raid_bdev1", 00:14:32.276 "raid_level": "raid1", 00:14:32.276 "base_bdevs": [ 00:14:32.276 "malloc1", 00:14:32.276 "malloc2", 00:14:32.276 "malloc3" 00:14:32.276 ], 00:14:32.276 "superblock": false, 00:14:32.276 "method": "bdev_raid_create", 00:14:32.276 "req_id": 1 00:14:32.276 } 00:14:32.276 Got JSON-RPC error response 00:14:32.276 response: 00:14:32.276 { 00:14:32.276 "code": -17, 00:14:32.276 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:32.276 } 00:14:32.276 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:14:32.276 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:32.276 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:32.276 06:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:32.276 06:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.276 06:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:32.534 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:32.534 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:32.534 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:32.791 [2024-07-23 06:27:45.272392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:32.791 [2024-07-23 06:27:45.272454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.791 [2024-07-23 06:27:45.272482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x288f9c234c80 00:14:32.791 [2024-07-23 06:27:45.272489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.791 [2024-07-23 06:27:45.273185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.792 [2024-07-23 06:27:45.273210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:32.792 [2024-07-23 06:27:45.273235] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:32.792 [2024-07-23 06:27:45.273247] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:32.792 pt1 00:14:32.792 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:32.792 
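The bdev_raid_create rejection above (-17, "File exists") is the intended outcome: malloc1..malloc3 still carry the superblock written for raid_bdev1, so the test wraps the call in the NOT() helper and treats the non-zero exit code as success. A minimal way to express the same expectation without the helper (paths and bdev names as in this run) might look like:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Expected to fail with -17 (File exists): the malloc bdevs already carry the
  # superblock of raid_bdev1, so a fresh raid bdev cannot be created over them.
  if "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
      echo "bdev_raid_create unexpectedly succeeded" >&2
      exit 1
  fi
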
06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:32.792 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:32.792 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:32.792 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:32.792 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:32.792 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:32.792 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:32.792 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:32.792 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:32.792 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.792 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.050 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.050 "name": "raid_bdev1", 00:14:33.050 "uuid": "a9a8d123-48bc-11ef-a06c-59ddad71024c", 00:14:33.050 "strip_size_kb": 0, 00:14:33.050 "state": "configuring", 00:14:33.050 "raid_level": "raid1", 00:14:33.050 "superblock": true, 00:14:33.050 "num_base_bdevs": 3, 00:14:33.050 "num_base_bdevs_discovered": 1, 00:14:33.050 "num_base_bdevs_operational": 3, 00:14:33.050 "base_bdevs_list": [ 00:14:33.050 { 00:14:33.050 "name": "pt1", 00:14:33.050 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:33.050 "is_configured": true, 00:14:33.050 "data_offset": 2048, 00:14:33.050 "data_size": 63488 00:14:33.050 }, 00:14:33.050 { 00:14:33.050 "name": null, 00:14:33.050 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:33.050 "is_configured": false, 00:14:33.050 "data_offset": 2048, 00:14:33.050 "data_size": 63488 00:14:33.050 }, 00:14:33.050 { 00:14:33.050 "name": null, 00:14:33.050 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:33.050 "is_configured": false, 00:14:33.050 "data_offset": 2048, 00:14:33.050 "data_size": 63488 00:14:33.050 } 00:14:33.050 ] 00:14:33.050 }' 00:14:33.050 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.050 06:27:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.308 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:14:33.308 06:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:33.569 [2024-07-23 06:27:46.020439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:33.569 [2024-07-23 06:27:46.020506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.569 [2024-07-23 06:27:46.020533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x288f9c235680 00:14:33.569 [2024-07-23 06:27:46.020541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.569 [2024-07-23 06:27:46.020653] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:14:33.569 [2024-07-23 06:27:46.020663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:33.569 [2024-07-23 06:27:46.020701] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:33.569 [2024-07-23 06:27:46.020710] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:33.569 pt2 00:14:33.569 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:33.828 [2024-07-23 06:27:46.292450] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:33.828 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:33.828 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:33.828 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:33.828 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:33.828 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:33.828 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:33.828 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:33.828 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:33.828 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:33.828 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:33.828 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.828 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.091 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:34.091 "name": "raid_bdev1", 00:14:34.091 "uuid": "a9a8d123-48bc-11ef-a06c-59ddad71024c", 00:14:34.091 "strip_size_kb": 0, 00:14:34.091 "state": "configuring", 00:14:34.091 "raid_level": "raid1", 00:14:34.091 "superblock": true, 00:14:34.091 "num_base_bdevs": 3, 00:14:34.091 "num_base_bdevs_discovered": 1, 00:14:34.091 "num_base_bdevs_operational": 3, 00:14:34.091 "base_bdevs_list": [ 00:14:34.091 { 00:14:34.091 "name": "pt1", 00:14:34.091 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:34.091 "is_configured": true, 00:14:34.091 "data_offset": 2048, 00:14:34.091 "data_size": 63488 00:14:34.091 }, 00:14:34.091 { 00:14:34.091 "name": null, 00:14:34.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.091 "is_configured": false, 00:14:34.091 "data_offset": 2048, 00:14:34.091 "data_size": 63488 00:14:34.091 }, 00:14:34.091 { 00:14:34.091 "name": null, 00:14:34.091 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:34.091 "is_configured": false, 00:14:34.091 "data_offset": 2048, 00:14:34.091 "data_size": 63488 00:14:34.091 } 00:14:34.091 ] 00:14:34.091 }' 00:14:34.091 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:34.091 06:27:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.658 06:27:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:34.658 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:34.658 06:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:34.658 [2024-07-23 06:27:47.104521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:34.658 [2024-07-23 06:27:47.104586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.658 [2024-07-23 06:27:47.104613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x288f9c235680 00:14:34.658 [2024-07-23 06:27:47.104620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.658 [2024-07-23 06:27:47.104734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.658 [2024-07-23 06:27:47.104744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:34.658 [2024-07-23 06:27:47.104783] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:34.658 [2024-07-23 06:27:47.104792] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:34.658 pt2 00:14:34.658 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:34.658 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:34.658 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:34.917 [2024-07-23 06:27:47.364547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:34.917 [2024-07-23 06:27:47.364639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.917 [2024-07-23 06:27:47.364665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x288f9c235400 00:14:34.917 [2024-07-23 06:27:47.364673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.917 [2024-07-23 06:27:47.364800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.917 [2024-07-23 06:27:47.364825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:34.917 [2024-07-23 06:27:47.364862] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:34.917 [2024-07-23 06:27:47.364870] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:34.917 [2024-07-23 06:27:47.364897] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x288f9c234780 00:14:34.917 [2024-07-23 06:27:47.364902] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:34.917 [2024-07-23 06:27:47.364947] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x288f9c297e20 00:14:34.917 [2024-07-23 06:27:47.365029] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x288f9c234780 00:14:34.917 [2024-07-23 06:27:47.365034] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x288f9c234780 00:14:34.917 [2024-07-23 06:27:47.365070] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
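With pt1 re-registered earlier and pt2/pt3 re-created in the loop above, the superblocks found during examine let raid_bdev1 re-assemble and go online without any explicit bdev_raid_create call. A condensed sketch of this re-assembly step (UUIDs, bdev names and socket path are the ones used in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # pt1 was re-created earlier; re-creating pt2 and pt3 exposes the remaining
  # superblocks, and the raid module re-assembles raid_bdev1 on its own.
  "$rpc" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  "$rpc" -s "$sock" bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003

  # No bdev_raid_create is needed; the re-assembled bdev should report "online".
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
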
00:14:34.917 pt3 00:14:34.917 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:34.917 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:34.917 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:34.917 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:34.917 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:34.917 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:34.917 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:34.917 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:34.917 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:34.917 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:34.917 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:34.917 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:34.917 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.917 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.176 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:35.176 "name": "raid_bdev1", 00:14:35.176 "uuid": "a9a8d123-48bc-11ef-a06c-59ddad71024c", 00:14:35.176 "strip_size_kb": 0, 00:14:35.176 "state": "online", 00:14:35.176 "raid_level": "raid1", 00:14:35.176 "superblock": true, 00:14:35.176 "num_base_bdevs": 3, 00:14:35.176 "num_base_bdevs_discovered": 3, 00:14:35.176 "num_base_bdevs_operational": 3, 00:14:35.176 "base_bdevs_list": [ 00:14:35.176 { 00:14:35.176 "name": "pt1", 00:14:35.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:35.176 "is_configured": true, 00:14:35.176 "data_offset": 2048, 00:14:35.176 "data_size": 63488 00:14:35.176 }, 00:14:35.176 { 00:14:35.176 "name": "pt2", 00:14:35.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.176 "is_configured": true, 00:14:35.176 "data_offset": 2048, 00:14:35.176 "data_size": 63488 00:14:35.176 }, 00:14:35.176 { 00:14:35.176 "name": "pt3", 00:14:35.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:35.176 "is_configured": true, 00:14:35.176 "data_offset": 2048, 00:14:35.176 "data_size": 63488 00:14:35.176 } 00:14:35.176 ] 00:14:35.176 }' 00:14:35.176 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:35.176 06:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.434 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:35.434 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:35.434 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:35.434 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:35.434 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:14:35.434 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:35.434 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:35.434 06:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:35.705 [2024-07-23 06:27:48.184638] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.705 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:35.705 "name": "raid_bdev1", 00:14:35.705 "aliases": [ 00:14:35.705 "a9a8d123-48bc-11ef-a06c-59ddad71024c" 00:14:35.705 ], 00:14:35.705 "product_name": "Raid Volume", 00:14:35.705 "block_size": 512, 00:14:35.705 "num_blocks": 63488, 00:14:35.705 "uuid": "a9a8d123-48bc-11ef-a06c-59ddad71024c", 00:14:35.705 "assigned_rate_limits": { 00:14:35.705 "rw_ios_per_sec": 0, 00:14:35.705 "rw_mbytes_per_sec": 0, 00:14:35.705 "r_mbytes_per_sec": 0, 00:14:35.705 "w_mbytes_per_sec": 0 00:14:35.705 }, 00:14:35.705 "claimed": false, 00:14:35.705 "zoned": false, 00:14:35.705 "supported_io_types": { 00:14:35.705 "read": true, 00:14:35.705 "write": true, 00:14:35.705 "unmap": false, 00:14:35.705 "flush": false, 00:14:35.705 "reset": true, 00:14:35.705 "nvme_admin": false, 00:14:35.705 "nvme_io": false, 00:14:35.705 "nvme_io_md": false, 00:14:35.705 "write_zeroes": true, 00:14:35.705 "zcopy": false, 00:14:35.705 "get_zone_info": false, 00:14:35.705 "zone_management": false, 00:14:35.705 "zone_append": false, 00:14:35.705 "compare": false, 00:14:35.706 "compare_and_write": false, 00:14:35.706 "abort": false, 00:14:35.706 "seek_hole": false, 00:14:35.706 "seek_data": false, 00:14:35.706 "copy": false, 00:14:35.706 "nvme_iov_md": false 00:14:35.706 }, 00:14:35.706 "memory_domains": [ 00:14:35.706 { 00:14:35.706 "dma_device_id": "system", 00:14:35.706 "dma_device_type": 1 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.706 "dma_device_type": 2 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "dma_device_id": "system", 00:14:35.706 "dma_device_type": 1 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.706 "dma_device_type": 2 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "dma_device_id": "system", 00:14:35.706 "dma_device_type": 1 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.706 "dma_device_type": 2 00:14:35.706 } 00:14:35.706 ], 00:14:35.706 "driver_specific": { 00:14:35.706 "raid": { 00:14:35.706 "uuid": "a9a8d123-48bc-11ef-a06c-59ddad71024c", 00:14:35.706 "strip_size_kb": 0, 00:14:35.706 "state": "online", 00:14:35.706 "raid_level": "raid1", 00:14:35.706 "superblock": true, 00:14:35.706 "num_base_bdevs": 3, 00:14:35.706 "num_base_bdevs_discovered": 3, 00:14:35.706 "num_base_bdevs_operational": 3, 00:14:35.706 "base_bdevs_list": [ 00:14:35.706 { 00:14:35.706 "name": "pt1", 00:14:35.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:35.706 "is_configured": true, 00:14:35.706 "data_offset": 2048, 00:14:35.706 "data_size": 63488 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "name": "pt2", 00:14:35.706 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.706 "is_configured": true, 00:14:35.706 "data_offset": 2048, 00:14:35.706 "data_size": 63488 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "name": "pt3", 00:14:35.706 "uuid": "00000000-0000-0000-0000-000000000003", 
00:14:35.706 "is_configured": true, 00:14:35.706 "data_offset": 2048, 00:14:35.706 "data_size": 63488 00:14:35.706 } 00:14:35.706 ] 00:14:35.706 } 00:14:35.706 } 00:14:35.706 }' 00:14:35.706 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:35.706 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:35.706 pt2 00:14:35.706 pt3' 00:14:35.706 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:35.706 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:35.706 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:36.016 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:36.016 "name": "pt1", 00:14:36.016 "aliases": [ 00:14:36.016 "00000000-0000-0000-0000-000000000001" 00:14:36.016 ], 00:14:36.016 "product_name": "passthru", 00:14:36.016 "block_size": 512, 00:14:36.016 "num_blocks": 65536, 00:14:36.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:36.016 "assigned_rate_limits": { 00:14:36.016 "rw_ios_per_sec": 0, 00:14:36.016 "rw_mbytes_per_sec": 0, 00:14:36.016 "r_mbytes_per_sec": 0, 00:14:36.016 "w_mbytes_per_sec": 0 00:14:36.016 }, 00:14:36.016 "claimed": true, 00:14:36.016 "claim_type": "exclusive_write", 00:14:36.016 "zoned": false, 00:14:36.016 "supported_io_types": { 00:14:36.016 "read": true, 00:14:36.016 "write": true, 00:14:36.016 "unmap": true, 00:14:36.016 "flush": true, 00:14:36.016 "reset": true, 00:14:36.016 "nvme_admin": false, 00:14:36.016 "nvme_io": false, 00:14:36.016 "nvme_io_md": false, 00:14:36.016 "write_zeroes": true, 00:14:36.016 "zcopy": true, 00:14:36.016 "get_zone_info": false, 00:14:36.016 "zone_management": false, 00:14:36.016 "zone_append": false, 00:14:36.016 "compare": false, 00:14:36.016 "compare_and_write": false, 00:14:36.016 "abort": true, 00:14:36.016 "seek_hole": false, 00:14:36.016 "seek_data": false, 00:14:36.016 "copy": true, 00:14:36.016 "nvme_iov_md": false 00:14:36.016 }, 00:14:36.016 "memory_domains": [ 00:14:36.016 { 00:14:36.016 "dma_device_id": "system", 00:14:36.016 "dma_device_type": 1 00:14:36.016 }, 00:14:36.016 { 00:14:36.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.016 "dma_device_type": 2 00:14:36.016 } 00:14:36.016 ], 00:14:36.016 "driver_specific": { 00:14:36.016 "passthru": { 00:14:36.016 "name": "pt1", 00:14:36.016 "base_bdev_name": "malloc1" 00:14:36.016 } 00:14:36.016 } 00:14:36.016 }' 00:14:36.016 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:36.016 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:36.016 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:36.016 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:36.016 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:36.016 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:36.016 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:36.016 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:36.275 06:27:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:36.275 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:36.275 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:36.275 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:36.275 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:36.275 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:36.275 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:36.533 "name": "pt2", 00:14:36.533 "aliases": [ 00:14:36.533 "00000000-0000-0000-0000-000000000002" 00:14:36.533 ], 00:14:36.533 "product_name": "passthru", 00:14:36.533 "block_size": 512, 00:14:36.533 "num_blocks": 65536, 00:14:36.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.533 "assigned_rate_limits": { 00:14:36.533 "rw_ios_per_sec": 0, 00:14:36.533 "rw_mbytes_per_sec": 0, 00:14:36.533 "r_mbytes_per_sec": 0, 00:14:36.533 "w_mbytes_per_sec": 0 00:14:36.533 }, 00:14:36.533 "claimed": true, 00:14:36.533 "claim_type": "exclusive_write", 00:14:36.533 "zoned": false, 00:14:36.533 "supported_io_types": { 00:14:36.533 "read": true, 00:14:36.533 "write": true, 00:14:36.533 "unmap": true, 00:14:36.533 "flush": true, 00:14:36.533 "reset": true, 00:14:36.533 "nvme_admin": false, 00:14:36.533 "nvme_io": false, 00:14:36.533 "nvme_io_md": false, 00:14:36.533 "write_zeroes": true, 00:14:36.533 "zcopy": true, 00:14:36.533 "get_zone_info": false, 00:14:36.533 "zone_management": false, 00:14:36.533 "zone_append": false, 00:14:36.533 "compare": false, 00:14:36.533 "compare_and_write": false, 00:14:36.533 "abort": true, 00:14:36.533 "seek_hole": false, 00:14:36.533 "seek_data": false, 00:14:36.533 "copy": true, 00:14:36.533 "nvme_iov_md": false 00:14:36.533 }, 00:14:36.533 "memory_domains": [ 00:14:36.533 { 00:14:36.533 "dma_device_id": "system", 00:14:36.533 "dma_device_type": 1 00:14:36.533 }, 00:14:36.533 { 00:14:36.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.533 "dma_device_type": 2 00:14:36.533 } 00:14:36.533 ], 00:14:36.533 "driver_specific": { 00:14:36.533 "passthru": { 00:14:36.533 "name": "pt2", 00:14:36.533 "base_bdev_name": "malloc2" 00:14:36.533 } 00:14:36.533 } 00:14:36.533 }' 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:36.533 
06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:36.533 06:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:36.792 "name": "pt3", 00:14:36.792 "aliases": [ 00:14:36.792 "00000000-0000-0000-0000-000000000003" 00:14:36.792 ], 00:14:36.792 "product_name": "passthru", 00:14:36.792 "block_size": 512, 00:14:36.792 "num_blocks": 65536, 00:14:36.792 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.792 "assigned_rate_limits": { 00:14:36.792 "rw_ios_per_sec": 0, 00:14:36.792 "rw_mbytes_per_sec": 0, 00:14:36.792 "r_mbytes_per_sec": 0, 00:14:36.792 "w_mbytes_per_sec": 0 00:14:36.792 }, 00:14:36.792 "claimed": true, 00:14:36.792 "claim_type": "exclusive_write", 00:14:36.792 "zoned": false, 00:14:36.792 "supported_io_types": { 00:14:36.792 "read": true, 00:14:36.792 "write": true, 00:14:36.792 "unmap": true, 00:14:36.792 "flush": true, 00:14:36.792 "reset": true, 00:14:36.792 "nvme_admin": false, 00:14:36.792 "nvme_io": false, 00:14:36.792 "nvme_io_md": false, 00:14:36.792 "write_zeroes": true, 00:14:36.792 "zcopy": true, 00:14:36.792 "get_zone_info": false, 00:14:36.792 "zone_management": false, 00:14:36.792 "zone_append": false, 00:14:36.792 "compare": false, 00:14:36.792 "compare_and_write": false, 00:14:36.792 "abort": true, 00:14:36.792 "seek_hole": false, 00:14:36.792 "seek_data": false, 00:14:36.792 "copy": true, 00:14:36.792 "nvme_iov_md": false 00:14:36.792 }, 00:14:36.792 "memory_domains": [ 00:14:36.792 { 00:14:36.792 "dma_device_id": "system", 00:14:36.792 "dma_device_type": 1 00:14:36.792 }, 00:14:36.792 { 00:14:36.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.792 "dma_device_type": 2 00:14:36.792 } 00:14:36.792 ], 00:14:36.792 "driver_specific": { 00:14:36.792 "passthru": { 00:14:36.792 "name": "pt3", 00:14:36.792 "base_bdev_name": "malloc3" 00:14:36.792 } 00:14:36.792 } 00:14:36.792 }' 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:36.792 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:37.050 [2024-07-23 06:27:49.520809] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.050 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' a9a8d123-48bc-11ef-a06c-59ddad71024c '!=' a9a8d123-48bc-11ef-a06c-59ddad71024c ']' 00:14:37.050 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:14:37.050 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:37.050 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:37.050 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:37.309 [2024-07-23 06:27:49.780822] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:37.309 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:37.309 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:37.309 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:37.309 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:37.309 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:37.309 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:37.309 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:37.309 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:37.309 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:37.309 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:37.309 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.309 06:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.568 06:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:37.568 "name": "raid_bdev1", 00:14:37.568 "uuid": "a9a8d123-48bc-11ef-a06c-59ddad71024c", 00:14:37.568 "strip_size_kb": 0, 00:14:37.568 "state": "online", 00:14:37.568 "raid_level": "raid1", 00:14:37.568 "superblock": true, 00:14:37.568 "num_base_bdevs": 3, 00:14:37.568 "num_base_bdevs_discovered": 2, 00:14:37.568 "num_base_bdevs_operational": 2, 00:14:37.568 "base_bdevs_list": [ 00:14:37.568 { 00:14:37.568 "name": null, 00:14:37.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.568 "is_configured": false, 00:14:37.568 "data_offset": 2048, 00:14:37.568 "data_size": 63488 00:14:37.568 }, 00:14:37.568 { 00:14:37.568 "name": "pt2", 00:14:37.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.568 "is_configured": true, 00:14:37.568 "data_offset": 2048, 00:14:37.568 "data_size": 63488 00:14:37.568 }, 00:14:37.568 { 
00:14:37.568 "name": "pt3", 00:14:37.568 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:37.568 "is_configured": true, 00:14:37.568 "data_offset": 2048, 00:14:37.568 "data_size": 63488 00:14:37.568 } 00:14:37.568 ] 00:14:37.568 }' 00:14:37.568 06:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:37.568 06:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.894 06:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:38.153 [2024-07-23 06:27:50.536885] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.153 [2024-07-23 06:27:50.536908] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.153 [2024-07-23 06:27:50.536936] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.153 [2024-07-23 06:27:50.536950] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.153 [2024-07-23 06:27:50.536955] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x288f9c234780 name raid_bdev1, state offline 00:14:38.153 06:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.153 06:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:14:38.410 06:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:14:38.410 06:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:14:38.410 06:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:14:38.410 06:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:38.410 06:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:38.668 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:38.668 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:38.668 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:38.926 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:38.926 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:38.926 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:14:38.926 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:14:38.926 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:39.184 [2024-07-23 06:27:51.501011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:39.184 [2024-07-23 06:27:51.501112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.184 [2024-07-23 06:27:51.501139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x288f9c235400 00:14:39.184 [2024-07-23 
06:27:51.501148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.185 [2024-07-23 06:27:51.501857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.185 [2024-07-23 06:27:51.501916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:39.185 [2024-07-23 06:27:51.501940] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:39.185 [2024-07-23 06:27:51.501952] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:39.185 pt2 00:14:39.185 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:39.185 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:39.185 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:39.185 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:39.185 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:39.185 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:39.185 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:39.185 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:39.185 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:39.185 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:39.185 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.185 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.444 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:39.444 "name": "raid_bdev1", 00:14:39.444 "uuid": "a9a8d123-48bc-11ef-a06c-59ddad71024c", 00:14:39.444 "strip_size_kb": 0, 00:14:39.444 "state": "configuring", 00:14:39.444 "raid_level": "raid1", 00:14:39.444 "superblock": true, 00:14:39.444 "num_base_bdevs": 3, 00:14:39.444 "num_base_bdevs_discovered": 1, 00:14:39.444 "num_base_bdevs_operational": 2, 00:14:39.444 "base_bdevs_list": [ 00:14:39.444 { 00:14:39.444 "name": null, 00:14:39.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.444 "is_configured": false, 00:14:39.444 "data_offset": 2048, 00:14:39.444 "data_size": 63488 00:14:39.444 }, 00:14:39.444 { 00:14:39.444 "name": "pt2", 00:14:39.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.444 "is_configured": true, 00:14:39.444 "data_offset": 2048, 00:14:39.444 "data_size": 63488 00:14:39.444 }, 00:14:39.444 { 00:14:39.444 "name": null, 00:14:39.444 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.444 "is_configured": false, 00:14:39.444 "data_offset": 2048, 00:14:39.444 "data_size": 63488 00:14:39.444 } 00:14:39.444 ] 00:14:39.444 }' 00:14:39.444 06:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:39.444 06:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.702 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:14:39.702 06:27:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:14:39.702 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:14:39.702 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:39.961 [2024-07-23 06:27:52.329061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:39.961 [2024-07-23 06:27:52.329132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.961 [2024-07-23 06:27:52.329160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x288f9c234780 00:14:39.961 [2024-07-23 06:27:52.329167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.961 [2024-07-23 06:27:52.329282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.961 [2024-07-23 06:27:52.329308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:39.961 [2024-07-23 06:27:52.329347] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:39.961 [2024-07-23 06:27:52.329356] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:39.961 [2024-07-23 06:27:52.329383] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x288f9c235180 00:14:39.961 [2024-07-23 06:27:52.329387] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:39.961 [2024-07-23 06:27:52.329407] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x288f9c297e20 00:14:39.961 [2024-07-23 06:27:52.329470] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x288f9c235180 00:14:39.961 [2024-07-23 06:27:52.329474] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x288f9c235180 00:14:39.961 [2024-07-23 06:27:52.329495] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.961 pt3 00:14:39.961 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:39.961 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:39.961 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:39.961 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:39.961 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:39.961 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:39.961 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:39.961 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:39.961 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:39.961 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:39.961 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.961 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
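verify_raid_bdev_state, whose locals are set up in the trace above, boils down to pulling the raid bdev's JSON and comparing a handful of fields against the expected values passed in ("online raid1 0 2" at this point: online, raid1, no strip size, two operational base bdevs). A rough stand-alone equivalent of that check (field names taken from the JSON dumps in this log; the comparisons themselves are illustrative, not the helper's exact code):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  set -e   # abort on the first failed comparison

  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state' <<< "$info") == online ]]
  [[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]
  [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq 2 ]]
  [[ $(jq -r '[.base_bdevs_list[] | select(.is_configured)] | length' <<< "$info") -eq 2 ]]
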
00:14:40.219 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.219 "name": "raid_bdev1", 00:14:40.219 "uuid": "a9a8d123-48bc-11ef-a06c-59ddad71024c", 00:14:40.219 "strip_size_kb": 0, 00:14:40.219 "state": "online", 00:14:40.219 "raid_level": "raid1", 00:14:40.219 "superblock": true, 00:14:40.219 "num_base_bdevs": 3, 00:14:40.219 "num_base_bdevs_discovered": 2, 00:14:40.219 "num_base_bdevs_operational": 2, 00:14:40.219 "base_bdevs_list": [ 00:14:40.219 { 00:14:40.219 "name": null, 00:14:40.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.219 "is_configured": false, 00:14:40.219 "data_offset": 2048, 00:14:40.219 "data_size": 63488 00:14:40.219 }, 00:14:40.219 { 00:14:40.219 "name": "pt2", 00:14:40.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.219 "is_configured": true, 00:14:40.219 "data_offset": 2048, 00:14:40.219 "data_size": 63488 00:14:40.219 }, 00:14:40.219 { 00:14:40.219 "name": "pt3", 00:14:40.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.219 "is_configured": true, 00:14:40.219 "data_offset": 2048, 00:14:40.219 "data_size": 63488 00:14:40.219 } 00:14:40.219 ] 00:14:40.219 }' 00:14:40.219 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.219 06:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.478 06:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:40.737 [2024-07-23 06:27:53.165105] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.737 [2024-07-23 06:27:53.165127] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.737 [2024-07-23 06:27:53.165166] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.737 [2024-07-23 06:27:53.165180] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.737 [2024-07-23 06:27:53.165184] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x288f9c235180 name raid_bdev1, state offline 00:14:40.737 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.737 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:14:41.040 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:14:41.040 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:14:41.040 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:14:41.040 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:14:41.040 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:41.299 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.558 [2024-07-23 06:27:53.913168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.558 [2024-07-23 06:27:53.913229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.558 
[2024-07-23 06:27:53.913257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x288f9c234780 00:14:41.558 [2024-07-23 06:27:53.913264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.558 [2024-07-23 06:27:53.913917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.558 [2024-07-23 06:27:53.913941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.558 [2024-07-23 06:27:53.913966] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:41.558 [2024-07-23 06:27:53.913978] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:41.558 [2024-07-23 06:27:53.914007] bdev_raid.c:3641:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:41.558 [2024-07-23 06:27:53.914011] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.558 [2024-07-23 06:27:53.914016] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x288f9c235180 name raid_bdev1, state configuring 00:14:41.558 [2024-07-23 06:27:53.914025] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.558 pt1 00:14:41.558 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:14:41.558 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:41.558 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:41.558 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:41.558 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:41.558 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:41.558 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:41.558 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.558 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.558 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.558 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.558 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.558 06:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.817 06:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.817 "name": "raid_bdev1", 00:14:41.817 "uuid": "a9a8d123-48bc-11ef-a06c-59ddad71024c", 00:14:41.817 "strip_size_kb": 0, 00:14:41.817 "state": "configuring", 00:14:41.817 "raid_level": "raid1", 00:14:41.817 "superblock": true, 00:14:41.817 "num_base_bdevs": 3, 00:14:41.817 "num_base_bdevs_discovered": 1, 00:14:41.817 "num_base_bdevs_operational": 2, 00:14:41.817 "base_bdevs_list": [ 00:14:41.817 { 00:14:41.817 "name": null, 00:14:41.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.817 "is_configured": false, 00:14:41.817 "data_offset": 2048, 00:14:41.817 "data_size": 63488 00:14:41.817 }, 
00:14:41.817 { 00:14:41.817 "name": "pt2", 00:14:41.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.817 "is_configured": true, 00:14:41.817 "data_offset": 2048, 00:14:41.817 "data_size": 63488 00:14:41.817 }, 00:14:41.817 { 00:14:41.817 "name": null, 00:14:41.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.817 "is_configured": false, 00:14:41.817 "data_offset": 2048, 00:14:41.817 "data_size": 63488 00:14:41.817 } 00:14:41.817 ] 00:14:41.817 }' 00:14:41.817 06:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.817 06:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.384 06:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:42.384 06:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:14:42.384 06:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:14:42.384 06:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:42.642 [2024-07-23 06:27:55.081262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:42.642 [2024-07-23 06:27:55.081334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.642 [2024-07-23 06:27:55.081361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x288f9c234c80 00:14:42.642 [2024-07-23 06:27:55.081369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.642 [2024-07-23 06:27:55.081484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.642 [2024-07-23 06:27:55.081495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:42.642 [2024-07-23 06:27:55.081533] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:42.642 [2024-07-23 06:27:55.081542] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:42.642 [2024-07-23 06:27:55.081569] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x288f9c235180 00:14:42.642 [2024-07-23 06:27:55.081574] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:42.642 [2024-07-23 06:27:55.081594] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x288f9c297e20 00:14:42.642 [2024-07-23 06:27:55.081655] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x288f9c235180 00:14:42.642 [2024-07-23 06:27:55.081660] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x288f9c235180 00:14:42.642 [2024-07-23 06:27:55.081687] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.642 pt3 00:14:42.642 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:42.642 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:42.642 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:42.642 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:14:42.643 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:42.643 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:42.643 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.643 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.643 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.643 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.643 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.643 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.901 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.901 "name": "raid_bdev1", 00:14:42.901 "uuid": "a9a8d123-48bc-11ef-a06c-59ddad71024c", 00:14:42.901 "strip_size_kb": 0, 00:14:42.901 "state": "online", 00:14:42.901 "raid_level": "raid1", 00:14:42.901 "superblock": true, 00:14:42.901 "num_base_bdevs": 3, 00:14:42.901 "num_base_bdevs_discovered": 2, 00:14:42.901 "num_base_bdevs_operational": 2, 00:14:42.901 "base_bdevs_list": [ 00:14:42.901 { 00:14:42.901 "name": null, 00:14:42.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.901 "is_configured": false, 00:14:42.901 "data_offset": 2048, 00:14:42.901 "data_size": 63488 00:14:42.901 }, 00:14:42.901 { 00:14:42.901 "name": "pt2", 00:14:42.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.901 "is_configured": true, 00:14:42.901 "data_offset": 2048, 00:14:42.901 "data_size": 63488 00:14:42.901 }, 00:14:42.901 { 00:14:42.901 "name": "pt3", 00:14:42.901 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.901 "is_configured": true, 00:14:42.901 "data_offset": 2048, 00:14:42.901 "data_size": 63488 00:14:42.901 } 00:14:42.901 ] 00:14:42.901 }' 00:14:42.901 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.901 06:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.159 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:43.159 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:43.418 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:14:43.418 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:43.418 06:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:14:43.701 [2024-07-23 06:27:56.169374] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.701 06:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' a9a8d123-48bc-11ef-a06c-59ddad71024c '!=' a9a8d123-48bc-11ef-a06c-59ddad71024c ']' 00:14:43.702 06:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 57609 00:14:43.702 06:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 57609 ']' 00:14:43.702 06:27:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 57609 00:14:43.702 06:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:14:43.702 06:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:43.702 06:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 57609 00:14:43.702 06:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:14:43.702 06:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:14:43.702 06:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:14:43.702 killing process with pid 57609 00:14:43.702 06:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 57609' 00:14:43.702 06:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 57609 00:14:43.702 [2024-07-23 06:27:56.198704] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:43.702 [2024-07-23 06:27:56.198725] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.702 [2024-07-23 06:27:56.198740] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.702 [2024-07-23 06:27:56.198744] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x288f9c235180 name raid_bdev1, state offline 00:14:43.702 06:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 57609 00:14:43.702 [2024-07-23 06:27:56.216916] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.961 06:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:43.961 00:14:43.961 real 0m18.408s 00:14:43.961 user 0m33.220s 00:14:43.961 sys 0m2.808s 00:14:43.961 ************************************ 00:14:43.961 END TEST raid_superblock_test 00:14:43.961 ************************************ 00:14:43.961 06:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:43.961 06:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.961 06:27:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:43.961 06:27:56 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:14:43.961 06:27:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:43.961 06:27:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.961 06:27:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.961 ************************************ 00:14:43.961 START TEST raid_read_error_test 00:14:43.961 ************************************ 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:43.961 06:27:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.0NPtewzjdi 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=58159 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 58159 /var/tmp/spdk-raid.sock 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 58159 ']' 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.961 06:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.961 [2024-07-23 06:27:56.458937] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
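The trace that follows builds each base bdev as a malloc disk wrapped in an error-injection bdev and a passthru bdev, assembles the three passthrus into a raid1 volume, then drives I/O through bdevperf while failing reads on the first member. A condensed sketch of that RPC sequence, using only the socket path and bdev names that appear in the log (the bdevperf process launched above with -z -f is assumed to already be serving /var/tmp/spdk-raid.sock):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for bdev in BaseBdev1 BaseBdev2 BaseBdev3; do
    $rpc bdev_malloc_create 32 512 -b ${bdev}_malloc            # 32 MiB malloc disk, 512-byte blocks
    $rpc bdev_error_create ${bdev}_malloc                       # exposes EE_${bdev}_malloc for fault injection
    $rpc bdev_passthru_create -b EE_${bdev}_malloc -p ${bdev}   # the raid consumes the passthru, not the malloc
done
$rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
$rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure   # fail reads on member 0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

Because raid1 can serve the failed read from another mirror, the verification below expects raid_bdev1 to stay online with all three members configured. The write-error variant later in this trace uses the same construction but injects a write failure instead, after which BaseBdev1 is dropped from the array and num_base_bdevs_discovered falls to 2.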
00:14:43.961 [2024-07-23 06:27:56.459237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:44.528 EAL: TSC is not safe to use in SMP mode 00:14:44.528 EAL: TSC is not invariant 00:14:44.528 [2024-07-23 06:27:57.000802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.787 [2024-07-23 06:27:57.092526] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:44.787 [2024-07-23 06:27:57.094929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.787 [2024-07-23 06:27:57.095784] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.787 [2024-07-23 06:27:57.095801] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.046 06:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.046 06:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:45.046 06:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:45.046 06:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:45.304 BaseBdev1_malloc 00:14:45.304 06:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:45.563 true 00:14:45.563 06:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:45.823 [2024-07-23 06:27:58.248023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:45.823 [2024-07-23 06:27:58.248094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.823 [2024-07-23 06:27:58.248136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x176e1a634780 00:14:45.823 [2024-07-23 06:27:58.248144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.823 [2024-07-23 06:27:58.248867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.823 [2024-07-23 06:27:58.248898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:45.823 BaseBdev1 00:14:45.823 06:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:45.823 06:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:46.085 BaseBdev2_malloc 00:14:46.085 06:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:46.344 true 00:14:46.344 06:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:46.604 [2024-07-23 06:27:59.012172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:46.604 [2024-07-23 06:27:59.012257] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.604 [2024-07-23 06:27:59.012284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x176e1a634c80 00:14:46.604 [2024-07-23 06:27:59.012293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.604 [2024-07-23 06:27:59.013045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.604 [2024-07-23 06:27:59.013073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:46.604 BaseBdev2 00:14:46.604 06:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:46.604 06:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:46.863 BaseBdev3_malloc 00:14:46.863 06:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:47.122 true 00:14:47.122 06:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:47.381 [2024-07-23 06:27:59.680220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:47.381 [2024-07-23 06:27:59.680287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.381 [2024-07-23 06:27:59.680315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x176e1a635180 00:14:47.381 [2024-07-23 06:27:59.680324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.381 [2024-07-23 06:27:59.680979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.381 [2024-07-23 06:27:59.681007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:47.381 BaseBdev3 00:14:47.381 06:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:47.640 [2024-07-23 06:28:00.008251] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.640 [2024-07-23 06:28:00.008914] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:47.640 [2024-07-23 06:28:00.008948] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.640 [2024-07-23 06:28:00.009021] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x176e1a635400 00:14:47.640 [2024-07-23 06:28:00.009028] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:47.640 [2024-07-23 06:28:00.009072] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x176e1a6a0e20 00:14:47.640 [2024-07-23 06:28:00.009184] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x176e1a635400 00:14:47.640 [2024-07-23 06:28:00.009188] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x176e1a635400 00:14:47.641 [2024-07-23 06:28:00.009217] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.641 06:28:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:47.641 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:47.641 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:47.641 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:47.641 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:47.641 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:47.641 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.641 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.641 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.641 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.641 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.641 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.900 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:47.900 "name": "raid_bdev1", 00:14:47.900 "uuid": "b508d9f7-48bc-11ef-a06c-59ddad71024c", 00:14:47.900 "strip_size_kb": 0, 00:14:47.900 "state": "online", 00:14:47.900 "raid_level": "raid1", 00:14:47.900 "superblock": true, 00:14:47.900 "num_base_bdevs": 3, 00:14:47.900 "num_base_bdevs_discovered": 3, 00:14:47.900 "num_base_bdevs_operational": 3, 00:14:47.900 "base_bdevs_list": [ 00:14:47.900 { 00:14:47.900 "name": "BaseBdev1", 00:14:47.900 "uuid": "276c65d2-3ec5-1855-85a5-2af5587647eb", 00:14:47.900 "is_configured": true, 00:14:47.900 "data_offset": 2048, 00:14:47.900 "data_size": 63488 00:14:47.900 }, 00:14:47.900 { 00:14:47.900 "name": "BaseBdev2", 00:14:47.900 "uuid": "2a39b7f9-29a9-a455-b2b6-da46e9a2fe12", 00:14:47.900 "is_configured": true, 00:14:47.900 "data_offset": 2048, 00:14:47.900 "data_size": 63488 00:14:47.900 }, 00:14:47.900 { 00:14:47.900 "name": "BaseBdev3", 00:14:47.900 "uuid": "128a2b0c-ac7d-d154-903c-f381ff7fe553", 00:14:47.900 "is_configured": true, 00:14:47.900 "data_offset": 2048, 00:14:47.900 "data_size": 63488 00:14:47.900 } 00:14:47.900 ] 00:14:47.900 }' 00:14:47.900 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:47.900 06:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.166 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:48.166 06:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:48.166 [2024-07-23 06:28:00.676487] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x176e1a6a0ec0 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.562 06:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.821 06:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:49.821 "name": "raid_bdev1", 00:14:49.821 "uuid": "b508d9f7-48bc-11ef-a06c-59ddad71024c", 00:14:49.821 "strip_size_kb": 0, 00:14:49.821 "state": "online", 00:14:49.821 "raid_level": "raid1", 00:14:49.821 "superblock": true, 00:14:49.821 "num_base_bdevs": 3, 00:14:49.821 "num_base_bdevs_discovered": 3, 00:14:49.821 "num_base_bdevs_operational": 3, 00:14:49.821 "base_bdevs_list": [ 00:14:49.821 { 00:14:49.821 "name": "BaseBdev1", 00:14:49.821 "uuid": "276c65d2-3ec5-1855-85a5-2af5587647eb", 00:14:49.821 "is_configured": true, 00:14:49.821 "data_offset": 2048, 00:14:49.821 "data_size": 63488 00:14:49.821 }, 00:14:49.821 { 00:14:49.821 "name": "BaseBdev2", 00:14:49.821 "uuid": "2a39b7f9-29a9-a455-b2b6-da46e9a2fe12", 00:14:49.821 "is_configured": true, 00:14:49.821 "data_offset": 2048, 00:14:49.821 "data_size": 63488 00:14:49.821 }, 00:14:49.821 { 00:14:49.821 "name": "BaseBdev3", 00:14:49.821 "uuid": "128a2b0c-ac7d-d154-903c-f381ff7fe553", 00:14:49.821 "is_configured": true, 00:14:49.821 "data_offset": 2048, 00:14:49.821 "data_size": 63488 00:14:49.821 } 00:14:49.822 ] 00:14:49.822 }' 00:14:49.822 06:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:49.822 06:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.081 06:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:50.340 [2024-07-23 06:28:02.752290] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:50.340 [2024-07-23 06:28:02.752319] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.340 [2024-07-23 06:28:02.752683] 
bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.340 [2024-07-23 06:28:02.752694] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.340 [2024-07-23 06:28:02.752710] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.340 [2024-07-23 06:28:02.752714] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x176e1a635400 name raid_bdev1, state offline 00:14:50.340 0 00:14:50.340 06:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 58159 00:14:50.340 06:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 58159 ']' 00:14:50.340 06:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 58159 00:14:50.340 06:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:14:50.340 06:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:50.340 06:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 58159 00:14:50.340 06:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:14:50.340 06:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:14:50.340 06:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:14:50.340 killing process with pid 58159 00:14:50.340 06:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58159' 00:14:50.340 06:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 58159 00:14:50.340 [2024-07-23 06:28:02.781470] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.340 06:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 58159 00:14:50.340 [2024-07-23 06:28:02.800270] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.601 06:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.0NPtewzjdi 00:14:50.601 06:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:50.601 06:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:50.601 06:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:14:50.601 06:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:14:50.601 06:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:50.601 06:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:50.601 06:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:50.601 00:14:50.601 real 0m6.547s 00:14:50.601 user 0m10.177s 00:14:50.601 sys 0m1.141s 00:14:50.601 06:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:50.601 ************************************ 00:14:50.601 END TEST raid_read_error_test 00:14:50.601 ************************************ 00:14:50.601 06:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.601 06:28:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:50.601 06:28:03 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:14:50.601 06:28:03 bdev_raid -- 
common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:50.601 06:28:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.601 06:28:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.601 ************************************ 00:14:50.601 START TEST raid_write_error_test 00:14:50.601 ************************************ 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.hxgntG7m9l 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=58290 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 58290 /var/tmp/spdk-raid.sock 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 58290 ']' 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.601 06:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.601 [2024-07-23 06:28:03.056225] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:50.601 [2024-07-23 06:28:03.056428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:51.173 EAL: TSC is not safe to use in SMP mode 00:14:51.173 EAL: TSC is not invariant 00:14:51.173 [2024-07-23 06:28:03.619037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.432 [2024-07-23 06:28:03.701129] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:51.432 [2024-07-23 06:28:03.703708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.432 [2024-07-23 06:28:03.704634] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.432 [2024-07-23 06:28:03.704648] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.690 06:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.690 06:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:51.690 06:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:51.690 06:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:51.947 BaseBdev1_malloc 00:14:51.947 06:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:52.205 true 00:14:52.205 06:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:52.464 [2024-07-23 06:28:04.932049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:52.464 [2024-07-23 06:28:04.932128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.464 [2024-07-23 06:28:04.932168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x37f8f8634780 00:14:52.464 [2024-07-23 06:28:04.932176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.464 [2024-07-23 06:28:04.932892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.464 [2024-07-23 06:28:04.932920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:14:52.464 BaseBdev1 00:14:52.464 06:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:52.464 06:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:52.722 BaseBdev2_malloc 00:14:52.722 06:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:52.980 true 00:14:52.980 06:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:53.238 [2024-07-23 06:28:05.660098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:53.238 [2024-07-23 06:28:05.660188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.238 [2024-07-23 06:28:05.660229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x37f8f8634c80 00:14:53.238 [2024-07-23 06:28:05.660238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.238 [2024-07-23 06:28:05.661067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.238 [2024-07-23 06:28:05.661094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:53.238 BaseBdev2 00:14:53.238 06:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:53.238 06:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:53.497 BaseBdev3_malloc 00:14:53.497 06:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:53.755 true 00:14:53.755 06:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:54.013 [2024-07-23 06:28:06.512175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:54.013 [2024-07-23 06:28:06.512237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.013 [2024-07-23 06:28:06.512263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x37f8f8635180 00:14:54.013 [2024-07-23 06:28:06.512272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.013 [2024-07-23 06:28:06.512920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.013 [2024-07-23 06:28:06.512945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:54.013 BaseBdev3 00:14:54.013 06:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:54.580 [2024-07-23 06:28:06.812218] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.580 [2024-07-23 06:28:06.812895] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:54.580 [2024-07-23 06:28:06.812921] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.580 [2024-07-23 06:28:06.812981] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x37f8f8635400 00:14:54.580 [2024-07-23 06:28:06.812988] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:54.580 [2024-07-23 06:28:06.813020] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x37f8f86a0e20 00:14:54.580 [2024-07-23 06:28:06.813100] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x37f8f8635400 00:14:54.580 [2024-07-23 06:28:06.813105] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x37f8f8635400 00:14:54.580 [2024-07-23 06:28:06.813143] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.580 06:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:54.580 06:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:54.580 06:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:54.580 06:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:54.580 06:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:54.580 06:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:54.580 06:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.580 06:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.580 06:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.580 06:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.580 06:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.580 06:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.580 06:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:54.580 "name": "raid_bdev1", 00:14:54.580 "uuid": "b9170d74-48bc-11ef-a06c-59ddad71024c", 00:14:54.580 "strip_size_kb": 0, 00:14:54.580 "state": "online", 00:14:54.580 "raid_level": "raid1", 00:14:54.580 "superblock": true, 00:14:54.580 "num_base_bdevs": 3, 00:14:54.580 "num_base_bdevs_discovered": 3, 00:14:54.580 "num_base_bdevs_operational": 3, 00:14:54.580 "base_bdevs_list": [ 00:14:54.580 { 00:14:54.580 "name": "BaseBdev1", 00:14:54.580 "uuid": "bd2a2dc3-e06f-315a-afa9-3952fd78e59f", 00:14:54.580 "is_configured": true, 00:14:54.580 "data_offset": 2048, 00:14:54.580 "data_size": 63488 00:14:54.580 }, 00:14:54.580 { 00:14:54.580 "name": "BaseBdev2", 00:14:54.580 "uuid": "e4fb1837-5831-8955-afc4-29f968a78516", 00:14:54.580 "is_configured": true, 00:14:54.580 "data_offset": 2048, 00:14:54.580 "data_size": 63488 00:14:54.580 }, 00:14:54.580 { 00:14:54.580 "name": "BaseBdev3", 00:14:54.580 "uuid": "315c9afe-9a6b-c55d-b35e-c6aca5b571dd", 00:14:54.580 "is_configured": true, 00:14:54.580 "data_offset": 2048, 00:14:54.580 
"data_size": 63488 00:14:54.580 } 00:14:54.580 ] 00:14:54.580 }' 00:14:54.580 06:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:54.580 06:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.148 06:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:55.148 06:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:55.148 [2024-07-23 06:28:07.524453] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x37f8f86a0ec0 00:14:56.084 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:56.343 [2024-07-23 06:28:08.717544] bdev_raid.c:2248:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:56.343 [2024-07-23 06:28:08.717632] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.343 [2024-07-23 06:28:08.717775] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x37f8f86a0ec0 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.343 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.602 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:56.602 "name": "raid_bdev1", 00:14:56.602 "uuid": "b9170d74-48bc-11ef-a06c-59ddad71024c", 00:14:56.602 "strip_size_kb": 0, 00:14:56.602 "state": "online", 00:14:56.602 "raid_level": "raid1", 00:14:56.602 "superblock": true, 00:14:56.602 "num_base_bdevs": 3, 00:14:56.602 
"num_base_bdevs_discovered": 2, 00:14:56.602 "num_base_bdevs_operational": 2, 00:14:56.602 "base_bdevs_list": [ 00:14:56.602 { 00:14:56.602 "name": null, 00:14:56.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.602 "is_configured": false, 00:14:56.602 "data_offset": 2048, 00:14:56.602 "data_size": 63488 00:14:56.602 }, 00:14:56.602 { 00:14:56.602 "name": "BaseBdev2", 00:14:56.602 "uuid": "e4fb1837-5831-8955-afc4-29f968a78516", 00:14:56.602 "is_configured": true, 00:14:56.602 "data_offset": 2048, 00:14:56.602 "data_size": 63488 00:14:56.602 }, 00:14:56.602 { 00:14:56.602 "name": "BaseBdev3", 00:14:56.602 "uuid": "315c9afe-9a6b-c55d-b35e-c6aca5b571dd", 00:14:56.602 "is_configured": true, 00:14:56.602 "data_offset": 2048, 00:14:56.602 "data_size": 63488 00:14:56.602 } 00:14:56.602 ] 00:14:56.602 }' 00:14:56.602 06:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:56.602 06:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.861 06:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:57.121 [2024-07-23 06:28:09.589439] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.121 [2024-07-23 06:28:09.589465] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.121 [2024-07-23 06:28:09.589785] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.121 [2024-07-23 06:28:09.589794] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.121 [2024-07-23 06:28:09.589807] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.121 [2024-07-23 06:28:09.589811] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x37f8f8635400 name raid_bdev1, state offline 00:14:57.121 0 00:14:57.121 06:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 58290 00:14:57.121 06:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 58290 ']' 00:14:57.121 06:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 58290 00:14:57.121 06:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:14:57.121 06:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:57.121 06:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 58290 00:14:57.121 06:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:14:57.121 06:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:14:57.121 killing process with pid 58290 00:14:57.121 06:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:14:57.121 06:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58290' 00:14:57.121 06:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 58290 00:14:57.121 [2024-07-23 06:28:09.616938] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.121 06:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 58290 00:14:57.121 [2024-07-23 06:28:09.635573] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:14:57.380 06:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.hxgntG7m9l 00:14:57.380 06:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:57.380 06:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:57.380 06:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:14:57.380 06:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:14:57.380 06:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:57.380 06:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:57.380 06:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:57.380 00:14:57.380 real 0m6.792s 00:14:57.380 user 0m10.690s 00:14:57.380 sys 0m1.150s 00:14:57.380 06:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.380 06:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.380 ************************************ 00:14:57.380 END TEST raid_write_error_test 00:14:57.380 ************************************ 00:14:57.380 06:28:09 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:57.380 06:28:09 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:14:57.380 06:28:09 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:57.380 06:28:09 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:14:57.380 06:28:09 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:57.380 06:28:09 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.380 06:28:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:57.380 ************************************ 00:14:57.380 START TEST raid_state_function_test 00:14:57.380 ************************************ 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- 
# echo BaseBdev3 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=58423 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 58423' 00:14:57.380 Process raid pid: 58423 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 58423 /var/tmp/spdk-raid.sock 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 58423 ']' 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:57.380 06:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:57.381 06:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.381 06:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.381 [2024-07-23 06:28:09.895499] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
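The state-function test that follows runs against a bare bdev_svc app rather than bdevperf and walks the raid bdev state machine over RPC. Its first check, sketched below from the commands visible in the trace, is that a raid0 volume declared before any of its members exist is accepted but held in the "configuring" state (socket path and names as in the log; bdev_svc is assumed to already be listening on the socket):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
# expected: "state": "configuring", num_base_bdevs_discovered 0, all four members unconfigured
$rpc bdev_raid_delete Existed_Raid     # tear down before re-creating and registering real base bdevs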
00:14:57.381 [2024-07-23 06:28:09.895770] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:57.947 EAL: TSC is not safe to use in SMP mode 00:14:57.947 EAL: TSC is not invariant 00:14:57.947 [2024-07-23 06:28:10.446441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.206 [2024-07-23 06:28:10.536462] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:58.206 [2024-07-23 06:28:10.538588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.206 [2024-07-23 06:28:10.539344] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.206 [2024-07-23 06:28:10.539373] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.773 06:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.773 06:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:58.773 06:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:58.773 [2024-07-23 06:28:11.211822] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:58.773 [2024-07-23 06:28:11.211951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:58.773 [2024-07-23 06:28:11.211956] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:58.773 [2024-07-23 06:28:11.211981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:58.773 [2024-07-23 06:28:11.211984] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:58.773 [2024-07-23 06:28:11.211992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:58.773 [2024-07-23 06:28:11.211995] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:58.774 [2024-07-23 06:28:11.212002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:58.774 06:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:58.774 06:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:58.774 06:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:58.774 06:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:58.774 06:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:58.774 06:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:58.774 06:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:58.774 06:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:58.774 06:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:58.774 06:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:58.774 06:28:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.774 06:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.032 06:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.032 "name": "Existed_Raid", 00:14:59.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.032 "strip_size_kb": 64, 00:14:59.032 "state": "configuring", 00:14:59.032 "raid_level": "raid0", 00:14:59.032 "superblock": false, 00:14:59.032 "num_base_bdevs": 4, 00:14:59.032 "num_base_bdevs_discovered": 0, 00:14:59.032 "num_base_bdevs_operational": 4, 00:14:59.032 "base_bdevs_list": [ 00:14:59.032 { 00:14:59.032 "name": "BaseBdev1", 00:14:59.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.032 "is_configured": false, 00:14:59.032 "data_offset": 0, 00:14:59.032 "data_size": 0 00:14:59.032 }, 00:14:59.032 { 00:14:59.032 "name": "BaseBdev2", 00:14:59.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.032 "is_configured": false, 00:14:59.032 "data_offset": 0, 00:14:59.032 "data_size": 0 00:14:59.032 }, 00:14:59.032 { 00:14:59.032 "name": "BaseBdev3", 00:14:59.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.032 "is_configured": false, 00:14:59.032 "data_offset": 0, 00:14:59.032 "data_size": 0 00:14:59.032 }, 00:14:59.032 { 00:14:59.032 "name": "BaseBdev4", 00:14:59.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.032 "is_configured": false, 00:14:59.032 "data_offset": 0, 00:14:59.032 "data_size": 0 00:14:59.032 } 00:14:59.032 ] 00:14:59.032 }' 00:14:59.032 06:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.032 06:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.599 06:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:59.599 [2024-07-23 06:28:12.063872] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.599 [2024-07-23 06:28:12.063925] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x29176ea34500 name Existed_Raid, state configuring 00:14:59.599 06:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:59.857 [2024-07-23 06:28:12.295985] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.857 [2024-07-23 06:28:12.296045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.857 [2024-07-23 06:28:12.296067] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.857 [2024-07-23 06:28:12.296074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.857 [2024-07-23 06:28:12.296077] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:59.857 [2024-07-23 06:28:12.296084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:59.857 [2024-07-23 06:28:12.296087] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
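The empty configuration is then re-created and, in the trace that continues below, the first real member is registered and the raid state re-read. A short sketch of that step, with names taken from the log (the waitforbdev/bdev_wait_for_examine polling is handled by autotest_common.sh inside the test itself):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_malloc_create 32 512 -b BaseBdev1      # first member appears and is immediately claimed by Existed_Raid
$rpc bdev_get_bdevs -b BaseBdev1 -t 2000         # confirm the bdev exists; claim_type shows exclusive_write
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
# expected: still "configuring", num_base_bdevs_discovered now 1 of 4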
00:14:59.857 [2024-07-23 06:28:12.296094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:59.857 06:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:00.115 [2024-07-23 06:28:12.537000] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.115 BaseBdev1 00:15:00.115 06:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:00.115 06:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:00.115 06:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:00.115 06:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:00.115 06:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:00.115 06:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:00.115 06:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:00.373 06:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.632 [ 00:15:00.632 { 00:15:00.632 "name": "BaseBdev1", 00:15:00.632 "aliases": [ 00:15:00.632 "bc806fb0-48bc-11ef-a06c-59ddad71024c" 00:15:00.632 ], 00:15:00.632 "product_name": "Malloc disk", 00:15:00.632 "block_size": 512, 00:15:00.632 "num_blocks": 65536, 00:15:00.632 "uuid": "bc806fb0-48bc-11ef-a06c-59ddad71024c", 00:15:00.632 "assigned_rate_limits": { 00:15:00.632 "rw_ios_per_sec": 0, 00:15:00.632 "rw_mbytes_per_sec": 0, 00:15:00.632 "r_mbytes_per_sec": 0, 00:15:00.632 "w_mbytes_per_sec": 0 00:15:00.632 }, 00:15:00.632 "claimed": true, 00:15:00.632 "claim_type": "exclusive_write", 00:15:00.632 "zoned": false, 00:15:00.632 "supported_io_types": { 00:15:00.632 "read": true, 00:15:00.632 "write": true, 00:15:00.632 "unmap": true, 00:15:00.632 "flush": true, 00:15:00.632 "reset": true, 00:15:00.632 "nvme_admin": false, 00:15:00.632 "nvme_io": false, 00:15:00.632 "nvme_io_md": false, 00:15:00.632 "write_zeroes": true, 00:15:00.632 "zcopy": true, 00:15:00.632 "get_zone_info": false, 00:15:00.632 "zone_management": false, 00:15:00.632 "zone_append": false, 00:15:00.632 "compare": false, 00:15:00.632 "compare_and_write": false, 00:15:00.632 "abort": true, 00:15:00.632 "seek_hole": false, 00:15:00.632 "seek_data": false, 00:15:00.632 "copy": true, 00:15:00.632 "nvme_iov_md": false 00:15:00.632 }, 00:15:00.632 "memory_domains": [ 00:15:00.632 { 00:15:00.632 "dma_device_id": "system", 00:15:00.632 "dma_device_type": 1 00:15:00.632 }, 00:15:00.632 { 00:15:00.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.632 "dma_device_type": 2 00:15:00.632 } 00:15:00.632 ], 00:15:00.632 "driver_specific": {} 00:15:00.632 } 00:15:00.632 ] 00:15:00.632 06:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:00.632 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:00.632 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:15:00.632 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:00.632 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:00.632 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:00.632 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:00.632 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:00.632 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:00.632 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:00.632 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:00.632 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.632 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.894 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:00.894 "name": "Existed_Raid", 00:15:00.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.894 "strip_size_kb": 64, 00:15:00.894 "state": "configuring", 00:15:00.894 "raid_level": "raid0", 00:15:00.894 "superblock": false, 00:15:00.894 "num_base_bdevs": 4, 00:15:00.894 "num_base_bdevs_discovered": 1, 00:15:00.894 "num_base_bdevs_operational": 4, 00:15:00.894 "base_bdevs_list": [ 00:15:00.894 { 00:15:00.894 "name": "BaseBdev1", 00:15:00.894 "uuid": "bc806fb0-48bc-11ef-a06c-59ddad71024c", 00:15:00.894 "is_configured": true, 00:15:00.894 "data_offset": 0, 00:15:00.894 "data_size": 65536 00:15:00.894 }, 00:15:00.894 { 00:15:00.894 "name": "BaseBdev2", 00:15:00.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.894 "is_configured": false, 00:15:00.894 "data_offset": 0, 00:15:00.894 "data_size": 0 00:15:00.894 }, 00:15:00.894 { 00:15:00.894 "name": "BaseBdev3", 00:15:00.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.894 "is_configured": false, 00:15:00.894 "data_offset": 0, 00:15:00.894 "data_size": 0 00:15:00.894 }, 00:15:00.894 { 00:15:00.894 "name": "BaseBdev4", 00:15:00.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.894 "is_configured": false, 00:15:00.894 "data_offset": 0, 00:15:00.894 "data_size": 0 00:15:00.894 } 00:15:00.894 ] 00:15:00.894 }' 00:15:00.894 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:00.894 06:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.155 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:01.414 [2024-07-23 06:28:13.864115] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.414 [2024-07-23 06:28:13.864144] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x29176ea34500 name Existed_Raid, state configuring 00:15:01.414 06:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4' -n Existed_Raid 00:15:01.699 [2024-07-23 06:28:14.104129] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.699 [2024-07-23 06:28:14.104981] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.699 [2024-07-23 06:28:14.105017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.699 [2024-07-23 06:28:14.105022] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:01.699 [2024-07-23 06:28:14.105031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.699 [2024-07-23 06:28:14.105034] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:01.699 [2024-07-23 06:28:14.105041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:01.699 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:01.699 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:01.699 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:01.699 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:01.699 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:01.699 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:01.700 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.700 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:01.700 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.700 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.700 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.700 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.700 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.700 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.957 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.957 "name": "Existed_Raid", 00:15:01.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.957 "strip_size_kb": 64, 00:15:01.957 "state": "configuring", 00:15:01.957 "raid_level": "raid0", 00:15:01.957 "superblock": false, 00:15:01.957 "num_base_bdevs": 4, 00:15:01.957 "num_base_bdevs_discovered": 1, 00:15:01.957 "num_base_bdevs_operational": 4, 00:15:01.957 "base_bdevs_list": [ 00:15:01.957 { 00:15:01.958 "name": "BaseBdev1", 00:15:01.958 "uuid": "bc806fb0-48bc-11ef-a06c-59ddad71024c", 00:15:01.958 "is_configured": true, 00:15:01.958 "data_offset": 0, 00:15:01.958 "data_size": 65536 00:15:01.958 }, 00:15:01.958 { 00:15:01.958 "name": "BaseBdev2", 00:15:01.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.958 "is_configured": false, 00:15:01.958 "data_offset": 0, 00:15:01.958 "data_size": 
0 00:15:01.958 }, 00:15:01.958 { 00:15:01.958 "name": "BaseBdev3", 00:15:01.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.958 "is_configured": false, 00:15:01.958 "data_offset": 0, 00:15:01.958 "data_size": 0 00:15:01.958 }, 00:15:01.958 { 00:15:01.958 "name": "BaseBdev4", 00:15:01.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.958 "is_configured": false, 00:15:01.958 "data_offset": 0, 00:15:01.958 "data_size": 0 00:15:01.958 } 00:15:01.958 ] 00:15:01.958 }' 00:15:01.958 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.958 06:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.216 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:02.474 [2024-07-23 06:28:14.952338] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.474 BaseBdev2 00:15:02.474 06:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:02.474 06:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:02.474 06:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:02.474 06:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:02.474 06:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:02.474 06:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:02.474 06:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:02.734 06:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:03.000 [ 00:15:03.000 { 00:15:03.000 "name": "BaseBdev2", 00:15:03.000 "aliases": [ 00:15:03.000 "bdf11de3-48bc-11ef-a06c-59ddad71024c" 00:15:03.000 ], 00:15:03.000 "product_name": "Malloc disk", 00:15:03.000 "block_size": 512, 00:15:03.000 "num_blocks": 65536, 00:15:03.000 "uuid": "bdf11de3-48bc-11ef-a06c-59ddad71024c", 00:15:03.000 "assigned_rate_limits": { 00:15:03.000 "rw_ios_per_sec": 0, 00:15:03.000 "rw_mbytes_per_sec": 0, 00:15:03.000 "r_mbytes_per_sec": 0, 00:15:03.000 "w_mbytes_per_sec": 0 00:15:03.000 }, 00:15:03.000 "claimed": true, 00:15:03.000 "claim_type": "exclusive_write", 00:15:03.000 "zoned": false, 00:15:03.000 "supported_io_types": { 00:15:03.000 "read": true, 00:15:03.000 "write": true, 00:15:03.000 "unmap": true, 00:15:03.000 "flush": true, 00:15:03.000 "reset": true, 00:15:03.000 "nvme_admin": false, 00:15:03.000 "nvme_io": false, 00:15:03.000 "nvme_io_md": false, 00:15:03.000 "write_zeroes": true, 00:15:03.000 "zcopy": true, 00:15:03.000 "get_zone_info": false, 00:15:03.000 "zone_management": false, 00:15:03.000 "zone_append": false, 00:15:03.000 "compare": false, 00:15:03.000 "compare_and_write": false, 00:15:03.000 "abort": true, 00:15:03.000 "seek_hole": false, 00:15:03.000 "seek_data": false, 00:15:03.000 "copy": true, 00:15:03.000 "nvme_iov_md": false 00:15:03.000 }, 00:15:03.000 "memory_domains": [ 00:15:03.000 { 00:15:03.000 "dma_device_id": "system", 00:15:03.000 "dma_device_type": 1 
00:15:03.000 }, 00:15:03.000 { 00:15:03.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.000 "dma_device_type": 2 00:15:03.001 } 00:15:03.001 ], 00:15:03.001 "driver_specific": {} 00:15:03.001 } 00:15:03.001 ] 00:15:03.001 06:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:03.001 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:03.001 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:03.260 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:03.260 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:03.260 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:03.260 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:03.260 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.260 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:03.260 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.260 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.260 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.260 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.260 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.260 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.260 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.260 "name": "Existed_Raid", 00:15:03.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.260 "strip_size_kb": 64, 00:15:03.260 "state": "configuring", 00:15:03.260 "raid_level": "raid0", 00:15:03.260 "superblock": false, 00:15:03.260 "num_base_bdevs": 4, 00:15:03.260 "num_base_bdevs_discovered": 2, 00:15:03.260 "num_base_bdevs_operational": 4, 00:15:03.260 "base_bdevs_list": [ 00:15:03.260 { 00:15:03.260 "name": "BaseBdev1", 00:15:03.260 "uuid": "bc806fb0-48bc-11ef-a06c-59ddad71024c", 00:15:03.260 "is_configured": true, 00:15:03.260 "data_offset": 0, 00:15:03.260 "data_size": 65536 00:15:03.260 }, 00:15:03.260 { 00:15:03.260 "name": "BaseBdev2", 00:15:03.260 "uuid": "bdf11de3-48bc-11ef-a06c-59ddad71024c", 00:15:03.260 "is_configured": true, 00:15:03.260 "data_offset": 0, 00:15:03.260 "data_size": 65536 00:15:03.260 }, 00:15:03.260 { 00:15:03.260 "name": "BaseBdev3", 00:15:03.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.260 "is_configured": false, 00:15:03.260 "data_offset": 0, 00:15:03.260 "data_size": 0 00:15:03.260 }, 00:15:03.260 { 00:15:03.260 "name": "BaseBdev4", 00:15:03.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.260 "is_configured": false, 00:15:03.260 "data_offset": 0, 00:15:03.260 "data_size": 0 00:15:03.260 } 00:15:03.260 ] 00:15:03.260 }' 00:15:03.260 06:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.260 06:28:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.828 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:03.828 [2024-07-23 06:28:16.296416] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:03.828 BaseBdev3 00:15:03.828 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:03.828 06:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:03.828 06:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:03.828 06:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:03.828 06:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:03.828 06:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:03.828 06:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:04.087 06:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:04.345 [ 00:15:04.345 { 00:15:04.345 "name": "BaseBdev3", 00:15:04.345 "aliases": [ 00:15:04.345 "bebe367f-48bc-11ef-a06c-59ddad71024c" 00:15:04.345 ], 00:15:04.345 "product_name": "Malloc disk", 00:15:04.345 "block_size": 512, 00:15:04.345 "num_blocks": 65536, 00:15:04.345 "uuid": "bebe367f-48bc-11ef-a06c-59ddad71024c", 00:15:04.345 "assigned_rate_limits": { 00:15:04.345 "rw_ios_per_sec": 0, 00:15:04.345 "rw_mbytes_per_sec": 0, 00:15:04.345 "r_mbytes_per_sec": 0, 00:15:04.345 "w_mbytes_per_sec": 0 00:15:04.345 }, 00:15:04.345 "claimed": true, 00:15:04.345 "claim_type": "exclusive_write", 00:15:04.345 "zoned": false, 00:15:04.345 "supported_io_types": { 00:15:04.345 "read": true, 00:15:04.345 "write": true, 00:15:04.345 "unmap": true, 00:15:04.345 "flush": true, 00:15:04.345 "reset": true, 00:15:04.345 "nvme_admin": false, 00:15:04.345 "nvme_io": false, 00:15:04.345 "nvme_io_md": false, 00:15:04.345 "write_zeroes": true, 00:15:04.345 "zcopy": true, 00:15:04.345 "get_zone_info": false, 00:15:04.345 "zone_management": false, 00:15:04.345 "zone_append": false, 00:15:04.345 "compare": false, 00:15:04.345 "compare_and_write": false, 00:15:04.345 "abort": true, 00:15:04.345 "seek_hole": false, 00:15:04.345 "seek_data": false, 00:15:04.345 "copy": true, 00:15:04.345 "nvme_iov_md": false 00:15:04.345 }, 00:15:04.345 "memory_domains": [ 00:15:04.345 { 00:15:04.345 "dma_device_id": "system", 00:15:04.345 "dma_device_type": 1 00:15:04.345 }, 00:15:04.345 { 00:15:04.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.345 "dma_device_type": 2 00:15:04.345 } 00:15:04.345 ], 00:15:04.345 "driver_specific": {} 00:15:04.345 } 00:15:04.345 ] 00:15:04.345 06:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:04.345 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:04.346 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:04.346 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:04.346 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:04.346 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:04.346 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:04.346 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:04.346 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:04.346 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:04.346 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:04.346 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:04.346 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:04.346 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.346 06:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.604 06:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:04.604 "name": "Existed_Raid", 00:15:04.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.604 "strip_size_kb": 64, 00:15:04.604 "state": "configuring", 00:15:04.604 "raid_level": "raid0", 00:15:04.604 "superblock": false, 00:15:04.604 "num_base_bdevs": 4, 00:15:04.604 "num_base_bdevs_discovered": 3, 00:15:04.604 "num_base_bdevs_operational": 4, 00:15:04.604 "base_bdevs_list": [ 00:15:04.604 { 00:15:04.604 "name": "BaseBdev1", 00:15:04.604 "uuid": "bc806fb0-48bc-11ef-a06c-59ddad71024c", 00:15:04.604 "is_configured": true, 00:15:04.604 "data_offset": 0, 00:15:04.604 "data_size": 65536 00:15:04.604 }, 00:15:04.604 { 00:15:04.604 "name": "BaseBdev2", 00:15:04.604 "uuid": "bdf11de3-48bc-11ef-a06c-59ddad71024c", 00:15:04.604 "is_configured": true, 00:15:04.604 "data_offset": 0, 00:15:04.605 "data_size": 65536 00:15:04.605 }, 00:15:04.605 { 00:15:04.605 "name": "BaseBdev3", 00:15:04.605 "uuid": "bebe367f-48bc-11ef-a06c-59ddad71024c", 00:15:04.605 "is_configured": true, 00:15:04.605 "data_offset": 0, 00:15:04.605 "data_size": 65536 00:15:04.605 }, 00:15:04.605 { 00:15:04.605 "name": "BaseBdev4", 00:15:04.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.605 "is_configured": false, 00:15:04.605 "data_offset": 0, 00:15:04.605 "data_size": 0 00:15:04.605 } 00:15:04.605 ] 00:15:04.605 }' 00:15:04.605 06:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:04.605 06:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.863 06:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:05.121 [2024-07-23 06:28:17.588492] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:05.121 [2024-07-23 06:28:17.588528] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x29176ea34a00 00:15:05.121 [2024-07-23 06:28:17.588532] bdev_raid.c:1722:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 262144, blocklen 512 00:15:05.121 [2024-07-23 06:28:17.588585] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x29176ea97e20 00:15:05.121 [2024-07-23 06:28:17.588685] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x29176ea34a00 00:15:05.121 [2024-07-23 06:28:17.588705] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x29176ea34a00 00:15:05.121 [2024-07-23 06:28:17.588736] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.121 BaseBdev4 00:15:05.121 06:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:05.121 06:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:05.121 06:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:05.121 06:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:05.121 06:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:05.121 06:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:05.122 06:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:05.380 06:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:05.638 [ 00:15:05.638 { 00:15:05.638 "name": "BaseBdev4", 00:15:05.638 "aliases": [ 00:15:05.638 "bf835e3b-48bc-11ef-a06c-59ddad71024c" 00:15:05.638 ], 00:15:05.638 "product_name": "Malloc disk", 00:15:05.638 "block_size": 512, 00:15:05.638 "num_blocks": 65536, 00:15:05.638 "uuid": "bf835e3b-48bc-11ef-a06c-59ddad71024c", 00:15:05.638 "assigned_rate_limits": { 00:15:05.638 "rw_ios_per_sec": 0, 00:15:05.638 "rw_mbytes_per_sec": 0, 00:15:05.638 "r_mbytes_per_sec": 0, 00:15:05.638 "w_mbytes_per_sec": 0 00:15:05.638 }, 00:15:05.638 "claimed": true, 00:15:05.638 "claim_type": "exclusive_write", 00:15:05.638 "zoned": false, 00:15:05.638 "supported_io_types": { 00:15:05.638 "read": true, 00:15:05.638 "write": true, 00:15:05.638 "unmap": true, 00:15:05.638 "flush": true, 00:15:05.638 "reset": true, 00:15:05.638 "nvme_admin": false, 00:15:05.638 "nvme_io": false, 00:15:05.638 "nvme_io_md": false, 00:15:05.638 "write_zeroes": true, 00:15:05.638 "zcopy": true, 00:15:05.638 "get_zone_info": false, 00:15:05.638 "zone_management": false, 00:15:05.638 "zone_append": false, 00:15:05.638 "compare": false, 00:15:05.638 "compare_and_write": false, 00:15:05.638 "abort": true, 00:15:05.638 "seek_hole": false, 00:15:05.638 "seek_data": false, 00:15:05.638 "copy": true, 00:15:05.638 "nvme_iov_md": false 00:15:05.638 }, 00:15:05.638 "memory_domains": [ 00:15:05.638 { 00:15:05.638 "dma_device_id": "system", 00:15:05.638 "dma_device_type": 1 00:15:05.638 }, 00:15:05.638 { 00:15:05.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.638 "dma_device_type": 2 00:15:05.638 } 00:15:05.638 ], 00:15:05.638 "driver_specific": {} 00:15:05.638 } 00:15:05.638 ] 00:15:05.638 06:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:05.638 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:05.639 06:28:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:05.639 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:05.639 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:05.639 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:05.639 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:05.639 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:05.639 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:05.639 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.639 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.639 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.639 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.639 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.639 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.902 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:05.902 "name": "Existed_Raid", 00:15:05.902 "uuid": "bf836514-48bc-11ef-a06c-59ddad71024c", 00:15:05.902 "strip_size_kb": 64, 00:15:05.902 "state": "online", 00:15:05.902 "raid_level": "raid0", 00:15:05.902 "superblock": false, 00:15:05.902 "num_base_bdevs": 4, 00:15:05.902 "num_base_bdevs_discovered": 4, 00:15:05.902 "num_base_bdevs_operational": 4, 00:15:05.902 "base_bdevs_list": [ 00:15:05.902 { 00:15:05.902 "name": "BaseBdev1", 00:15:05.902 "uuid": "bc806fb0-48bc-11ef-a06c-59ddad71024c", 00:15:05.902 "is_configured": true, 00:15:05.902 "data_offset": 0, 00:15:05.902 "data_size": 65536 00:15:05.902 }, 00:15:05.902 { 00:15:05.903 "name": "BaseBdev2", 00:15:05.903 "uuid": "bdf11de3-48bc-11ef-a06c-59ddad71024c", 00:15:05.903 "is_configured": true, 00:15:05.903 "data_offset": 0, 00:15:05.903 "data_size": 65536 00:15:05.903 }, 00:15:05.903 { 00:15:05.903 "name": "BaseBdev3", 00:15:05.903 "uuid": "bebe367f-48bc-11ef-a06c-59ddad71024c", 00:15:05.903 "is_configured": true, 00:15:05.903 "data_offset": 0, 00:15:05.903 "data_size": 65536 00:15:05.903 }, 00:15:05.903 { 00:15:05.903 "name": "BaseBdev4", 00:15:05.903 "uuid": "bf835e3b-48bc-11ef-a06c-59ddad71024c", 00:15:05.903 "is_configured": true, 00:15:05.903 "data_offset": 0, 00:15:05.903 "data_size": 65536 00:15:05.903 } 00:15:05.903 ] 00:15:05.903 }' 00:15:05.903 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.903 06:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.472 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:06.472 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:06.472 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:06.472 
06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:06.472 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:06.472 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:06.472 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:06.472 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:06.472 [2024-07-23 06:28:18.928541] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.472 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:06.472 "name": "Existed_Raid", 00:15:06.472 "aliases": [ 00:15:06.472 "bf836514-48bc-11ef-a06c-59ddad71024c" 00:15:06.472 ], 00:15:06.472 "product_name": "Raid Volume", 00:15:06.472 "block_size": 512, 00:15:06.472 "num_blocks": 262144, 00:15:06.472 "uuid": "bf836514-48bc-11ef-a06c-59ddad71024c", 00:15:06.472 "assigned_rate_limits": { 00:15:06.472 "rw_ios_per_sec": 0, 00:15:06.472 "rw_mbytes_per_sec": 0, 00:15:06.472 "r_mbytes_per_sec": 0, 00:15:06.472 "w_mbytes_per_sec": 0 00:15:06.472 }, 00:15:06.472 "claimed": false, 00:15:06.472 "zoned": false, 00:15:06.472 "supported_io_types": { 00:15:06.472 "read": true, 00:15:06.472 "write": true, 00:15:06.472 "unmap": true, 00:15:06.472 "flush": true, 00:15:06.472 "reset": true, 00:15:06.472 "nvme_admin": false, 00:15:06.472 "nvme_io": false, 00:15:06.472 "nvme_io_md": false, 00:15:06.472 "write_zeroes": true, 00:15:06.472 "zcopy": false, 00:15:06.472 "get_zone_info": false, 00:15:06.472 "zone_management": false, 00:15:06.472 "zone_append": false, 00:15:06.472 "compare": false, 00:15:06.472 "compare_and_write": false, 00:15:06.472 "abort": false, 00:15:06.472 "seek_hole": false, 00:15:06.472 "seek_data": false, 00:15:06.472 "copy": false, 00:15:06.472 "nvme_iov_md": false 00:15:06.472 }, 00:15:06.472 "memory_domains": [ 00:15:06.472 { 00:15:06.472 "dma_device_id": "system", 00:15:06.472 "dma_device_type": 1 00:15:06.472 }, 00:15:06.472 { 00:15:06.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.472 "dma_device_type": 2 00:15:06.472 }, 00:15:06.472 { 00:15:06.472 "dma_device_id": "system", 00:15:06.472 "dma_device_type": 1 00:15:06.472 }, 00:15:06.472 { 00:15:06.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.472 "dma_device_type": 2 00:15:06.472 }, 00:15:06.472 { 00:15:06.472 "dma_device_id": "system", 00:15:06.472 "dma_device_type": 1 00:15:06.472 }, 00:15:06.472 { 00:15:06.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.472 "dma_device_type": 2 00:15:06.472 }, 00:15:06.472 { 00:15:06.472 "dma_device_id": "system", 00:15:06.472 "dma_device_type": 1 00:15:06.472 }, 00:15:06.472 { 00:15:06.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.472 "dma_device_type": 2 00:15:06.472 } 00:15:06.472 ], 00:15:06.472 "driver_specific": { 00:15:06.472 "raid": { 00:15:06.472 "uuid": "bf836514-48bc-11ef-a06c-59ddad71024c", 00:15:06.472 "strip_size_kb": 64, 00:15:06.472 "state": "online", 00:15:06.472 "raid_level": "raid0", 00:15:06.472 "superblock": false, 00:15:06.472 "num_base_bdevs": 4, 00:15:06.472 "num_base_bdevs_discovered": 4, 00:15:06.472 "num_base_bdevs_operational": 4, 00:15:06.472 "base_bdevs_list": [ 00:15:06.472 { 00:15:06.472 "name": "BaseBdev1", 00:15:06.472 "uuid": "bc806fb0-48bc-11ef-a06c-59ddad71024c", 00:15:06.472 
"is_configured": true, 00:15:06.472 "data_offset": 0, 00:15:06.472 "data_size": 65536 00:15:06.472 }, 00:15:06.472 { 00:15:06.473 "name": "BaseBdev2", 00:15:06.473 "uuid": "bdf11de3-48bc-11ef-a06c-59ddad71024c", 00:15:06.473 "is_configured": true, 00:15:06.473 "data_offset": 0, 00:15:06.473 "data_size": 65536 00:15:06.473 }, 00:15:06.473 { 00:15:06.473 "name": "BaseBdev3", 00:15:06.473 "uuid": "bebe367f-48bc-11ef-a06c-59ddad71024c", 00:15:06.473 "is_configured": true, 00:15:06.473 "data_offset": 0, 00:15:06.473 "data_size": 65536 00:15:06.473 }, 00:15:06.473 { 00:15:06.473 "name": "BaseBdev4", 00:15:06.473 "uuid": "bf835e3b-48bc-11ef-a06c-59ddad71024c", 00:15:06.473 "is_configured": true, 00:15:06.473 "data_offset": 0, 00:15:06.473 "data_size": 65536 00:15:06.473 } 00:15:06.473 ] 00:15:06.473 } 00:15:06.473 } 00:15:06.473 }' 00:15:06.473 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:06.473 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:06.473 BaseBdev2 00:15:06.473 BaseBdev3 00:15:06.473 BaseBdev4' 00:15:06.473 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:06.473 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:06.473 06:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:06.731 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:06.731 "name": "BaseBdev1", 00:15:06.731 "aliases": [ 00:15:06.731 "bc806fb0-48bc-11ef-a06c-59ddad71024c" 00:15:06.731 ], 00:15:06.731 "product_name": "Malloc disk", 00:15:06.731 "block_size": 512, 00:15:06.731 "num_blocks": 65536, 00:15:06.731 "uuid": "bc806fb0-48bc-11ef-a06c-59ddad71024c", 00:15:06.731 "assigned_rate_limits": { 00:15:06.731 "rw_ios_per_sec": 0, 00:15:06.731 "rw_mbytes_per_sec": 0, 00:15:06.731 "r_mbytes_per_sec": 0, 00:15:06.731 "w_mbytes_per_sec": 0 00:15:06.731 }, 00:15:06.731 "claimed": true, 00:15:06.731 "claim_type": "exclusive_write", 00:15:06.731 "zoned": false, 00:15:06.731 "supported_io_types": { 00:15:06.731 "read": true, 00:15:06.731 "write": true, 00:15:06.731 "unmap": true, 00:15:06.731 "flush": true, 00:15:06.731 "reset": true, 00:15:06.731 "nvme_admin": false, 00:15:06.731 "nvme_io": false, 00:15:06.731 "nvme_io_md": false, 00:15:06.731 "write_zeroes": true, 00:15:06.731 "zcopy": true, 00:15:06.731 "get_zone_info": false, 00:15:06.731 "zone_management": false, 00:15:06.731 "zone_append": false, 00:15:06.731 "compare": false, 00:15:06.731 "compare_and_write": false, 00:15:06.731 "abort": true, 00:15:06.731 "seek_hole": false, 00:15:06.731 "seek_data": false, 00:15:06.731 "copy": true, 00:15:06.731 "nvme_iov_md": false 00:15:06.731 }, 00:15:06.731 "memory_domains": [ 00:15:06.731 { 00:15:06.731 "dma_device_id": "system", 00:15:06.731 "dma_device_type": 1 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.731 "dma_device_type": 2 00:15:06.731 } 00:15:06.731 ], 00:15:06.731 "driver_specific": {} 00:15:06.731 }' 00:15:06.731 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.731 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.990 06:28:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:06.990 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.990 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.990 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:06.990 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.990 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.990 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:06.990 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.990 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.990 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:06.990 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:06.990 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:06.990 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:07.249 "name": "BaseBdev2", 00:15:07.249 "aliases": [ 00:15:07.249 "bdf11de3-48bc-11ef-a06c-59ddad71024c" 00:15:07.249 ], 00:15:07.249 "product_name": "Malloc disk", 00:15:07.249 "block_size": 512, 00:15:07.249 "num_blocks": 65536, 00:15:07.249 "uuid": "bdf11de3-48bc-11ef-a06c-59ddad71024c", 00:15:07.249 "assigned_rate_limits": { 00:15:07.249 "rw_ios_per_sec": 0, 00:15:07.249 "rw_mbytes_per_sec": 0, 00:15:07.249 "r_mbytes_per_sec": 0, 00:15:07.249 "w_mbytes_per_sec": 0 00:15:07.249 }, 00:15:07.249 "claimed": true, 00:15:07.249 "claim_type": "exclusive_write", 00:15:07.249 "zoned": false, 00:15:07.249 "supported_io_types": { 00:15:07.249 "read": true, 00:15:07.249 "write": true, 00:15:07.249 "unmap": true, 00:15:07.249 "flush": true, 00:15:07.249 "reset": true, 00:15:07.249 "nvme_admin": false, 00:15:07.249 "nvme_io": false, 00:15:07.249 "nvme_io_md": false, 00:15:07.249 "write_zeroes": true, 00:15:07.249 "zcopy": true, 00:15:07.249 "get_zone_info": false, 00:15:07.249 "zone_management": false, 00:15:07.249 "zone_append": false, 00:15:07.249 "compare": false, 00:15:07.249 "compare_and_write": false, 00:15:07.249 "abort": true, 00:15:07.249 "seek_hole": false, 00:15:07.249 "seek_data": false, 00:15:07.249 "copy": true, 00:15:07.249 "nvme_iov_md": false 00:15:07.249 }, 00:15:07.249 "memory_domains": [ 00:15:07.249 { 00:15:07.249 "dma_device_id": "system", 00:15:07.249 "dma_device_type": 1 00:15:07.249 }, 00:15:07.249 { 00:15:07.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.249 "dma_device_type": 2 00:15:07.249 } 00:15:07.249 ], 00:15:07.249 "driver_specific": {} 00:15:07.249 }' 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.249 
06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:07.249 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:07.508 "name": "BaseBdev3", 00:15:07.508 "aliases": [ 00:15:07.508 "bebe367f-48bc-11ef-a06c-59ddad71024c" 00:15:07.508 ], 00:15:07.508 "product_name": "Malloc disk", 00:15:07.508 "block_size": 512, 00:15:07.508 "num_blocks": 65536, 00:15:07.508 "uuid": "bebe367f-48bc-11ef-a06c-59ddad71024c", 00:15:07.508 "assigned_rate_limits": { 00:15:07.508 "rw_ios_per_sec": 0, 00:15:07.508 "rw_mbytes_per_sec": 0, 00:15:07.508 "r_mbytes_per_sec": 0, 00:15:07.508 "w_mbytes_per_sec": 0 00:15:07.508 }, 00:15:07.508 "claimed": true, 00:15:07.508 "claim_type": "exclusive_write", 00:15:07.508 "zoned": false, 00:15:07.508 "supported_io_types": { 00:15:07.508 "read": true, 00:15:07.508 "write": true, 00:15:07.508 "unmap": true, 00:15:07.508 "flush": true, 00:15:07.508 "reset": true, 00:15:07.508 "nvme_admin": false, 00:15:07.508 "nvme_io": false, 00:15:07.508 "nvme_io_md": false, 00:15:07.508 "write_zeroes": true, 00:15:07.508 "zcopy": true, 00:15:07.508 "get_zone_info": false, 00:15:07.508 "zone_management": false, 00:15:07.508 "zone_append": false, 00:15:07.508 "compare": false, 00:15:07.508 "compare_and_write": false, 00:15:07.508 "abort": true, 00:15:07.508 "seek_hole": false, 00:15:07.508 "seek_data": false, 00:15:07.508 "copy": true, 00:15:07.508 "nvme_iov_md": false 00:15:07.508 }, 00:15:07.508 "memory_domains": [ 00:15:07.508 { 00:15:07.508 "dma_device_id": "system", 00:15:07.508 "dma_device_type": 1 00:15:07.508 }, 00:15:07.508 { 00:15:07.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.508 "dma_device_type": 2 00:15:07.508 } 00:15:07.508 ], 00:15:07.508 "driver_specific": {} 00:15:07.508 }' 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
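The repeated `jq .block_size` / `jq .md_size` / `jq .md_interleave` / `jq .dif_type` pairs above come from the property check that compares the assembled `Existed_Raid` volume against each of its configured base bdevs. A condensed sketch of that comparison is shown below, assuming the same rpc.py socket; the loop structure and error handling are illustrative and not taken from the test's actual helper.

```bash
# Sketch of the per-bdev property comparison seen in the log (illustrative only).
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

raid_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')

for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    base_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    # The raid0 volume must expose the same block size and metadata layout as
    # every base bdev: 512-byte blocks and no metadata/DIF in this run, which
    # is why the log shows "[[ 512 == 512 ]]" and "[[ null == null ]]" checks.
    for field in .block_size .md_size .md_interleave .dif_type; do
        [[ "$(jq "$field" <<< "$raid_info")" == "$(jq "$field" <<< "$base_info")" ]] \
            || { echo "mismatch on $field for $name"; exit 1; }
    done
done
```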
00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:07.508 06:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:08.074 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:08.074 "name": "BaseBdev4", 00:15:08.074 "aliases": [ 00:15:08.074 "bf835e3b-48bc-11ef-a06c-59ddad71024c" 00:15:08.074 ], 00:15:08.074 "product_name": "Malloc disk", 00:15:08.074 "block_size": 512, 00:15:08.074 "num_blocks": 65536, 00:15:08.074 "uuid": "bf835e3b-48bc-11ef-a06c-59ddad71024c", 00:15:08.074 "assigned_rate_limits": { 00:15:08.074 "rw_ios_per_sec": 0, 00:15:08.074 "rw_mbytes_per_sec": 0, 00:15:08.074 "r_mbytes_per_sec": 0, 00:15:08.074 "w_mbytes_per_sec": 0 00:15:08.074 }, 00:15:08.074 "claimed": true, 00:15:08.074 "claim_type": "exclusive_write", 00:15:08.074 "zoned": false, 00:15:08.074 "supported_io_types": { 00:15:08.074 "read": true, 00:15:08.074 "write": true, 00:15:08.074 "unmap": true, 00:15:08.074 "flush": true, 00:15:08.074 "reset": true, 00:15:08.074 "nvme_admin": false, 00:15:08.074 "nvme_io": false, 00:15:08.074 "nvme_io_md": false, 00:15:08.074 "write_zeroes": true, 00:15:08.074 "zcopy": true, 00:15:08.074 "get_zone_info": false, 00:15:08.074 "zone_management": false, 00:15:08.074 "zone_append": false, 00:15:08.074 "compare": false, 00:15:08.074 "compare_and_write": false, 00:15:08.074 "abort": true, 00:15:08.074 "seek_hole": false, 00:15:08.074 "seek_data": false, 00:15:08.074 "copy": true, 00:15:08.074 "nvme_iov_md": false 00:15:08.074 }, 00:15:08.074 "memory_domains": [ 00:15:08.074 { 00:15:08.074 "dma_device_id": "system", 00:15:08.074 "dma_device_type": 1 00:15:08.074 }, 00:15:08.074 { 00:15:08.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.074 "dma_device_type": 2 00:15:08.074 } 00:15:08.074 ], 00:15:08.074 "driver_specific": {} 00:15:08.074 }' 00:15:08.074 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.074 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.074 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:08.074 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.074 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.074 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:08.074 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.074 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:15:08.074 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:08.074 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.074 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.074 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:08.074 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:08.074 [2024-07-23 06:28:20.576713] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:08.074 [2024-07-23 06:28:20.576752] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.074 [2024-07-23 06:28:20.576781] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.333 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.590 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:08.590 "name": "Existed_Raid", 00:15:08.590 "uuid": "bf836514-48bc-11ef-a06c-59ddad71024c", 00:15:08.590 "strip_size_kb": 64, 00:15:08.590 "state": "offline", 00:15:08.590 "raid_level": "raid0", 00:15:08.590 "superblock": false, 00:15:08.590 "num_base_bdevs": 4, 00:15:08.590 "num_base_bdevs_discovered": 3, 00:15:08.590 "num_base_bdevs_operational": 3, 00:15:08.590 "base_bdevs_list": [ 00:15:08.590 { 00:15:08.590 "name": null, 00:15:08.590 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:08.590 "is_configured": false, 00:15:08.590 "data_offset": 0, 00:15:08.590 "data_size": 65536 00:15:08.590 }, 00:15:08.590 { 00:15:08.590 "name": "BaseBdev2", 00:15:08.590 "uuid": "bdf11de3-48bc-11ef-a06c-59ddad71024c", 00:15:08.590 "is_configured": true, 00:15:08.590 "data_offset": 0, 00:15:08.590 "data_size": 65536 00:15:08.590 }, 00:15:08.590 { 00:15:08.590 "name": "BaseBdev3", 00:15:08.590 "uuid": "bebe367f-48bc-11ef-a06c-59ddad71024c", 00:15:08.590 "is_configured": true, 00:15:08.590 "data_offset": 0, 00:15:08.590 "data_size": 65536 00:15:08.590 }, 00:15:08.590 { 00:15:08.590 "name": "BaseBdev4", 00:15:08.590 "uuid": "bf835e3b-48bc-11ef-a06c-59ddad71024c", 00:15:08.590 "is_configured": true, 00:15:08.590 "data_offset": 0, 00:15:08.590 "data_size": 65536 00:15:08.590 } 00:15:08.590 ] 00:15:08.590 }' 00:15:08.590 06:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:08.590 06:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.848 06:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:08.848 06:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:08.848 06:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.848 06:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:09.106 06:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:09.106 06:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:09.106 06:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:09.365 [2024-07-23 06:28:21.687106] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:09.365 06:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:09.365 06:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:09.365 06:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.365 06:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:09.623 06:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:09.623 06:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:09.623 06:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:09.882 [2024-07-23 06:28:22.189118] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:09.882 06:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:09.882 06:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:09.882 06:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.882 06:28:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:10.141 06:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:10.141 06:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:10.141 06:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:10.399 [2024-07-23 06:28:22.763556] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:10.399 [2024-07-23 06:28:22.763588] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x29176ea34a00 name Existed_Raid, state offline 00:15:10.399 06:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:10.399 06:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:10.399 06:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.399 06:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:10.658 06:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:10.658 06:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:10.658 06:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:10.658 06:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:10.658 06:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:10.658 06:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:10.916 BaseBdev2 00:15:10.916 06:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:10.916 06:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:10.916 06:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:10.916 06:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:10.916 06:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:10.916 06:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:10.916 06:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:11.174 06:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:11.433 [ 00:15:11.433 { 00:15:11.433 "name": "BaseBdev2", 00:15:11.433 "aliases": [ 00:15:11.433 "c2ea0134-48bc-11ef-a06c-59ddad71024c" 00:15:11.433 ], 00:15:11.433 "product_name": "Malloc disk", 00:15:11.433 "block_size": 512, 00:15:11.433 "num_blocks": 65536, 00:15:11.433 "uuid": "c2ea0134-48bc-11ef-a06c-59ddad71024c", 00:15:11.433 "assigned_rate_limits": { 00:15:11.433 "rw_ios_per_sec": 0, 00:15:11.433 "rw_mbytes_per_sec": 0, 00:15:11.433 "r_mbytes_per_sec": 0, 00:15:11.433 "w_mbytes_per_sec": 0 
00:15:11.433 }, 00:15:11.433 "claimed": false, 00:15:11.433 "zoned": false, 00:15:11.433 "supported_io_types": { 00:15:11.433 "read": true, 00:15:11.433 "write": true, 00:15:11.433 "unmap": true, 00:15:11.433 "flush": true, 00:15:11.433 "reset": true, 00:15:11.433 "nvme_admin": false, 00:15:11.433 "nvme_io": false, 00:15:11.433 "nvme_io_md": false, 00:15:11.433 "write_zeroes": true, 00:15:11.433 "zcopy": true, 00:15:11.433 "get_zone_info": false, 00:15:11.433 "zone_management": false, 00:15:11.433 "zone_append": false, 00:15:11.433 "compare": false, 00:15:11.433 "compare_and_write": false, 00:15:11.433 "abort": true, 00:15:11.433 "seek_hole": false, 00:15:11.433 "seek_data": false, 00:15:11.433 "copy": true, 00:15:11.433 "nvme_iov_md": false 00:15:11.433 }, 00:15:11.433 "memory_domains": [ 00:15:11.433 { 00:15:11.433 "dma_device_id": "system", 00:15:11.433 "dma_device_type": 1 00:15:11.433 }, 00:15:11.433 { 00:15:11.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.433 "dma_device_type": 2 00:15:11.433 } 00:15:11.433 ], 00:15:11.433 "driver_specific": {} 00:15:11.433 } 00:15:11.433 ] 00:15:11.433 06:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:11.433 06:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:11.433 06:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:11.433 06:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:11.692 BaseBdev3 00:15:11.692 06:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:11.692 06:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:11.692 06:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:11.692 06:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:11.692 06:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:11.692 06:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:11.692 06:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:11.974 06:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:12.232 [ 00:15:12.232 { 00:15:12.232 "name": "BaseBdev3", 00:15:12.232 "aliases": [ 00:15:12.232 "c3606d82-48bc-11ef-a06c-59ddad71024c" 00:15:12.232 ], 00:15:12.233 "product_name": "Malloc disk", 00:15:12.233 "block_size": 512, 00:15:12.233 "num_blocks": 65536, 00:15:12.233 "uuid": "c3606d82-48bc-11ef-a06c-59ddad71024c", 00:15:12.233 "assigned_rate_limits": { 00:15:12.233 "rw_ios_per_sec": 0, 00:15:12.233 "rw_mbytes_per_sec": 0, 00:15:12.233 "r_mbytes_per_sec": 0, 00:15:12.233 "w_mbytes_per_sec": 0 00:15:12.233 }, 00:15:12.233 "claimed": false, 00:15:12.233 "zoned": false, 00:15:12.233 "supported_io_types": { 00:15:12.233 "read": true, 00:15:12.233 "write": true, 00:15:12.233 "unmap": true, 00:15:12.233 "flush": true, 00:15:12.233 "reset": true, 00:15:12.233 "nvme_admin": false, 00:15:12.233 "nvme_io": false, 00:15:12.233 "nvme_io_md": 
false, 00:15:12.233 "write_zeroes": true, 00:15:12.233 "zcopy": true, 00:15:12.233 "get_zone_info": false, 00:15:12.233 "zone_management": false, 00:15:12.233 "zone_append": false, 00:15:12.233 "compare": false, 00:15:12.233 "compare_and_write": false, 00:15:12.233 "abort": true, 00:15:12.233 "seek_hole": false, 00:15:12.233 "seek_data": false, 00:15:12.233 "copy": true, 00:15:12.233 "nvme_iov_md": false 00:15:12.233 }, 00:15:12.233 "memory_domains": [ 00:15:12.233 { 00:15:12.233 "dma_device_id": "system", 00:15:12.233 "dma_device_type": 1 00:15:12.233 }, 00:15:12.233 { 00:15:12.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.233 "dma_device_type": 2 00:15:12.233 } 00:15:12.233 ], 00:15:12.233 "driver_specific": {} 00:15:12.233 } 00:15:12.233 ] 00:15:12.233 06:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:12.233 06:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:12.233 06:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:12.233 06:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:12.491 BaseBdev4 00:15:12.491 06:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:12.491 06:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:12.491 06:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.491 06:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:12.491 06:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.491 06:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.491 06:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:12.749 06:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:13.008 [ 00:15:13.008 { 00:15:13.008 "name": "BaseBdev4", 00:15:13.008 "aliases": [ 00:15:13.008 "c3dbb900-48bc-11ef-a06c-59ddad71024c" 00:15:13.008 ], 00:15:13.008 "product_name": "Malloc disk", 00:15:13.008 "block_size": 512, 00:15:13.008 "num_blocks": 65536, 00:15:13.008 "uuid": "c3dbb900-48bc-11ef-a06c-59ddad71024c", 00:15:13.008 "assigned_rate_limits": { 00:15:13.008 "rw_ios_per_sec": 0, 00:15:13.008 "rw_mbytes_per_sec": 0, 00:15:13.008 "r_mbytes_per_sec": 0, 00:15:13.008 "w_mbytes_per_sec": 0 00:15:13.008 }, 00:15:13.008 "claimed": false, 00:15:13.008 "zoned": false, 00:15:13.008 "supported_io_types": { 00:15:13.008 "read": true, 00:15:13.008 "write": true, 00:15:13.008 "unmap": true, 00:15:13.008 "flush": true, 00:15:13.008 "reset": true, 00:15:13.008 "nvme_admin": false, 00:15:13.008 "nvme_io": false, 00:15:13.008 "nvme_io_md": false, 00:15:13.008 "write_zeroes": true, 00:15:13.008 "zcopy": true, 00:15:13.008 "get_zone_info": false, 00:15:13.008 "zone_management": false, 00:15:13.008 "zone_append": false, 00:15:13.008 "compare": false, 00:15:13.008 "compare_and_write": false, 00:15:13.008 "abort": true, 00:15:13.008 "seek_hole": false, 00:15:13.008 "seek_data": false, 
00:15:13.008 "copy": true, 00:15:13.008 "nvme_iov_md": false 00:15:13.008 }, 00:15:13.008 "memory_domains": [ 00:15:13.008 { 00:15:13.008 "dma_device_id": "system", 00:15:13.008 "dma_device_type": 1 00:15:13.008 }, 00:15:13.008 { 00:15:13.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.008 "dma_device_type": 2 00:15:13.008 } 00:15:13.008 ], 00:15:13.008 "driver_specific": {} 00:15:13.008 } 00:15:13.008 ] 00:15:13.008 06:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:13.008 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:13.008 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:13.008 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:13.267 [2024-07-23 06:28:25.598437] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:13.267 [2024-07-23 06:28:25.598504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:13.267 [2024-07-23 06:28:25.598531] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.267 [2024-07-23 06:28:25.599087] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:13.267 [2024-07-23 06:28:25.599105] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:13.267 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:13.267 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:13.267 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:13.267 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:13.267 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:13.267 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:13.267 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:13.267 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:13.267 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:13.267 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:13.267 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.267 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.525 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:13.525 "name": "Existed_Raid", 00:15:13.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.525 "strip_size_kb": 64, 00:15:13.525 "state": "configuring", 00:15:13.525 "raid_level": "raid0", 00:15:13.525 "superblock": false, 00:15:13.525 "num_base_bdevs": 4, 00:15:13.525 "num_base_bdevs_discovered": 3, 00:15:13.525 "num_base_bdevs_operational": 
4, 00:15:13.525 "base_bdevs_list": [ 00:15:13.525 { 00:15:13.525 "name": "BaseBdev1", 00:15:13.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.525 "is_configured": false, 00:15:13.525 "data_offset": 0, 00:15:13.525 "data_size": 0 00:15:13.525 }, 00:15:13.525 { 00:15:13.525 "name": "BaseBdev2", 00:15:13.525 "uuid": "c2ea0134-48bc-11ef-a06c-59ddad71024c", 00:15:13.525 "is_configured": true, 00:15:13.525 "data_offset": 0, 00:15:13.525 "data_size": 65536 00:15:13.525 }, 00:15:13.525 { 00:15:13.525 "name": "BaseBdev3", 00:15:13.525 "uuid": "c3606d82-48bc-11ef-a06c-59ddad71024c", 00:15:13.525 "is_configured": true, 00:15:13.525 "data_offset": 0, 00:15:13.525 "data_size": 65536 00:15:13.525 }, 00:15:13.525 { 00:15:13.525 "name": "BaseBdev4", 00:15:13.525 "uuid": "c3dbb900-48bc-11ef-a06c-59ddad71024c", 00:15:13.525 "is_configured": true, 00:15:13.525 "data_offset": 0, 00:15:13.525 "data_size": 65536 00:15:13.525 } 00:15:13.525 ] 00:15:13.525 }' 00:15:13.525 06:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:13.525 06:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.783 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:14.042 [2024-07-23 06:28:26.410490] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:14.042 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:14.042 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.042 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:14.042 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:14.042 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.042 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:14.042 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.042 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.042 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.042 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.042 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.042 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.300 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.300 "name": "Existed_Raid", 00:15:14.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.300 "strip_size_kb": 64, 00:15:14.300 "state": "configuring", 00:15:14.300 "raid_level": "raid0", 00:15:14.300 "superblock": false, 00:15:14.300 "num_base_bdevs": 4, 00:15:14.300 "num_base_bdevs_discovered": 2, 00:15:14.300 "num_base_bdevs_operational": 4, 00:15:14.300 "base_bdevs_list": [ 00:15:14.300 { 00:15:14.300 "name": "BaseBdev1", 00:15:14.300 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:14.300 "is_configured": false, 00:15:14.300 "data_offset": 0, 00:15:14.300 "data_size": 0 00:15:14.300 }, 00:15:14.300 { 00:15:14.300 "name": null, 00:15:14.300 "uuid": "c2ea0134-48bc-11ef-a06c-59ddad71024c", 00:15:14.300 "is_configured": false, 00:15:14.300 "data_offset": 0, 00:15:14.300 "data_size": 65536 00:15:14.300 }, 00:15:14.300 { 00:15:14.300 "name": "BaseBdev3", 00:15:14.300 "uuid": "c3606d82-48bc-11ef-a06c-59ddad71024c", 00:15:14.300 "is_configured": true, 00:15:14.300 "data_offset": 0, 00:15:14.300 "data_size": 65536 00:15:14.300 }, 00:15:14.300 { 00:15:14.300 "name": "BaseBdev4", 00:15:14.300 "uuid": "c3dbb900-48bc-11ef-a06c-59ddad71024c", 00:15:14.300 "is_configured": true, 00:15:14.300 "data_offset": 0, 00:15:14.300 "data_size": 65536 00:15:14.300 } 00:15:14.300 ] 00:15:14.300 }' 00:15:14.300 06:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.300 06:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.558 06:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.559 06:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:14.817 06:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:14.817 06:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:15.075 [2024-07-23 06:28:27.582718] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.075 BaseBdev1 00:15:15.350 06:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:15.350 06:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:15.350 06:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:15.350 06:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:15.350 06:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:15.350 06:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:15.350 06:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:15.609 06:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:15.609 [ 00:15:15.609 { 00:15:15.609 "name": "BaseBdev1", 00:15:15.609 "aliases": [ 00:15:15.609 "c5785bd7-48bc-11ef-a06c-59ddad71024c" 00:15:15.609 ], 00:15:15.609 "product_name": "Malloc disk", 00:15:15.609 "block_size": 512, 00:15:15.609 "num_blocks": 65536, 00:15:15.609 "uuid": "c5785bd7-48bc-11ef-a06c-59ddad71024c", 00:15:15.609 "assigned_rate_limits": { 00:15:15.609 "rw_ios_per_sec": 0, 00:15:15.609 "rw_mbytes_per_sec": 0, 00:15:15.609 "r_mbytes_per_sec": 0, 00:15:15.609 "w_mbytes_per_sec": 0 00:15:15.609 }, 00:15:15.609 "claimed": true, 00:15:15.609 "claim_type": "exclusive_write", 00:15:15.609 "zoned": false, 00:15:15.609 "supported_io_types": { 00:15:15.609 "read": true, 00:15:15.609 
"write": true, 00:15:15.609 "unmap": true, 00:15:15.609 "flush": true, 00:15:15.609 "reset": true, 00:15:15.609 "nvme_admin": false, 00:15:15.609 "nvme_io": false, 00:15:15.609 "nvme_io_md": false, 00:15:15.609 "write_zeroes": true, 00:15:15.609 "zcopy": true, 00:15:15.609 "get_zone_info": false, 00:15:15.609 "zone_management": false, 00:15:15.609 "zone_append": false, 00:15:15.609 "compare": false, 00:15:15.609 "compare_and_write": false, 00:15:15.609 "abort": true, 00:15:15.609 "seek_hole": false, 00:15:15.609 "seek_data": false, 00:15:15.609 "copy": true, 00:15:15.609 "nvme_iov_md": false 00:15:15.609 }, 00:15:15.609 "memory_domains": [ 00:15:15.609 { 00:15:15.609 "dma_device_id": "system", 00:15:15.609 "dma_device_type": 1 00:15:15.609 }, 00:15:15.609 { 00:15:15.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.609 "dma_device_type": 2 00:15:15.609 } 00:15:15.609 ], 00:15:15.609 "driver_specific": {} 00:15:15.609 } 00:15:15.609 ] 00:15:15.867 06:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:15.867 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:15.867 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:15.867 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:15.867 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:15.867 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:15.867 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:15.867 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.867 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:15.867 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.867 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.867 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.867 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.867 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:15.867 "name": "Existed_Raid", 00:15:15.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.867 "strip_size_kb": 64, 00:15:15.867 "state": "configuring", 00:15:15.867 "raid_level": "raid0", 00:15:15.867 "superblock": false, 00:15:15.867 "num_base_bdevs": 4, 00:15:15.867 "num_base_bdevs_discovered": 3, 00:15:15.868 "num_base_bdevs_operational": 4, 00:15:15.868 "base_bdevs_list": [ 00:15:15.868 { 00:15:15.868 "name": "BaseBdev1", 00:15:15.868 "uuid": "c5785bd7-48bc-11ef-a06c-59ddad71024c", 00:15:15.868 "is_configured": true, 00:15:15.868 "data_offset": 0, 00:15:15.868 "data_size": 65536 00:15:15.868 }, 00:15:15.868 { 00:15:15.868 "name": null, 00:15:15.868 "uuid": "c2ea0134-48bc-11ef-a06c-59ddad71024c", 00:15:15.868 "is_configured": false, 00:15:15.868 "data_offset": 0, 00:15:15.868 "data_size": 65536 00:15:15.868 }, 00:15:15.868 { 00:15:15.868 "name": "BaseBdev3", 00:15:15.868 "uuid": 
"c3606d82-48bc-11ef-a06c-59ddad71024c", 00:15:15.868 "is_configured": true, 00:15:15.868 "data_offset": 0, 00:15:15.868 "data_size": 65536 00:15:15.868 }, 00:15:15.868 { 00:15:15.868 "name": "BaseBdev4", 00:15:15.868 "uuid": "c3dbb900-48bc-11ef-a06c-59ddad71024c", 00:15:15.868 "is_configured": true, 00:15:15.868 "data_offset": 0, 00:15:15.868 "data_size": 65536 00:15:15.868 } 00:15:15.868 ] 00:15:15.868 }' 00:15:15.868 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:15.868 06:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.433 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.433 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:16.692 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:16.692 06:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:16.692 [2024-07-23 06:28:29.202670] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.950 "name": "Existed_Raid", 00:15:16.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.950 "strip_size_kb": 64, 00:15:16.950 "state": "configuring", 00:15:16.950 "raid_level": "raid0", 00:15:16.950 "superblock": false, 00:15:16.950 "num_base_bdevs": 4, 00:15:16.950 "num_base_bdevs_discovered": 2, 00:15:16.950 "num_base_bdevs_operational": 4, 00:15:16.950 "base_bdevs_list": [ 00:15:16.950 { 00:15:16.950 "name": "BaseBdev1", 00:15:16.950 "uuid": "c5785bd7-48bc-11ef-a06c-59ddad71024c", 00:15:16.950 "is_configured": true, 00:15:16.950 "data_offset": 0, 00:15:16.950 "data_size": 65536 00:15:16.950 }, 00:15:16.950 { 
00:15:16.950 "name": null, 00:15:16.950 "uuid": "c2ea0134-48bc-11ef-a06c-59ddad71024c", 00:15:16.950 "is_configured": false, 00:15:16.950 "data_offset": 0, 00:15:16.950 "data_size": 65536 00:15:16.950 }, 00:15:16.950 { 00:15:16.950 "name": null, 00:15:16.950 "uuid": "c3606d82-48bc-11ef-a06c-59ddad71024c", 00:15:16.950 "is_configured": false, 00:15:16.950 "data_offset": 0, 00:15:16.950 "data_size": 65536 00:15:16.950 }, 00:15:16.950 { 00:15:16.950 "name": "BaseBdev4", 00:15:16.950 "uuid": "c3dbb900-48bc-11ef-a06c-59ddad71024c", 00:15:16.950 "is_configured": true, 00:15:16.950 "data_offset": 0, 00:15:16.950 "data_size": 65536 00:15:16.950 } 00:15:16.950 ] 00:15:16.950 }' 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.950 06:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.571 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:17.571 06:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.571 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:17.571 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:17.829 [2024-07-23 06:28:30.282769] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:17.829 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:17.829 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:17.829 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:17.829 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:17.829 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:17.829 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:17.829 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:17.829 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:17.829 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:17.829 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:17.829 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.829 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.087 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.087 "name": "Existed_Raid", 00:15:18.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.087 "strip_size_kb": 64, 00:15:18.087 "state": "configuring", 00:15:18.087 "raid_level": "raid0", 00:15:18.087 "superblock": false, 00:15:18.087 "num_base_bdevs": 4, 00:15:18.087 "num_base_bdevs_discovered": 3, 00:15:18.087 
"num_base_bdevs_operational": 4, 00:15:18.087 "base_bdevs_list": [ 00:15:18.087 { 00:15:18.087 "name": "BaseBdev1", 00:15:18.087 "uuid": "c5785bd7-48bc-11ef-a06c-59ddad71024c", 00:15:18.087 "is_configured": true, 00:15:18.087 "data_offset": 0, 00:15:18.087 "data_size": 65536 00:15:18.087 }, 00:15:18.087 { 00:15:18.087 "name": null, 00:15:18.087 "uuid": "c2ea0134-48bc-11ef-a06c-59ddad71024c", 00:15:18.087 "is_configured": false, 00:15:18.087 "data_offset": 0, 00:15:18.087 "data_size": 65536 00:15:18.087 }, 00:15:18.087 { 00:15:18.087 "name": "BaseBdev3", 00:15:18.087 "uuid": "c3606d82-48bc-11ef-a06c-59ddad71024c", 00:15:18.087 "is_configured": true, 00:15:18.087 "data_offset": 0, 00:15:18.087 "data_size": 65536 00:15:18.087 }, 00:15:18.087 { 00:15:18.087 "name": "BaseBdev4", 00:15:18.087 "uuid": "c3dbb900-48bc-11ef-a06c-59ddad71024c", 00:15:18.087 "is_configured": true, 00:15:18.087 "data_offset": 0, 00:15:18.087 "data_size": 65536 00:15:18.087 } 00:15:18.087 ] 00:15:18.087 }' 00:15:18.087 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.087 06:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.345 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.345 06:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:18.603 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:18.603 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:18.860 [2024-07-23 06:28:31.374901] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:19.119 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:19.119 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:19.119 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:19.119 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:19.119 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:19.119 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:19.119 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:19.119 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:19.119 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:19.119 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:19.119 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.119 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.377 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.377 "name": "Existed_Raid", 00:15:19.377 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:19.377 "strip_size_kb": 64, 00:15:19.377 "state": "configuring", 00:15:19.377 "raid_level": "raid0", 00:15:19.377 "superblock": false, 00:15:19.377 "num_base_bdevs": 4, 00:15:19.377 "num_base_bdevs_discovered": 2, 00:15:19.377 "num_base_bdevs_operational": 4, 00:15:19.377 "base_bdevs_list": [ 00:15:19.377 { 00:15:19.377 "name": null, 00:15:19.377 "uuid": "c5785bd7-48bc-11ef-a06c-59ddad71024c", 00:15:19.377 "is_configured": false, 00:15:19.377 "data_offset": 0, 00:15:19.377 "data_size": 65536 00:15:19.377 }, 00:15:19.377 { 00:15:19.377 "name": null, 00:15:19.377 "uuid": "c2ea0134-48bc-11ef-a06c-59ddad71024c", 00:15:19.377 "is_configured": false, 00:15:19.377 "data_offset": 0, 00:15:19.377 "data_size": 65536 00:15:19.377 }, 00:15:19.377 { 00:15:19.377 "name": "BaseBdev3", 00:15:19.377 "uuid": "c3606d82-48bc-11ef-a06c-59ddad71024c", 00:15:19.377 "is_configured": true, 00:15:19.377 "data_offset": 0, 00:15:19.377 "data_size": 65536 00:15:19.377 }, 00:15:19.377 { 00:15:19.377 "name": "BaseBdev4", 00:15:19.377 "uuid": "c3dbb900-48bc-11ef-a06c-59ddad71024c", 00:15:19.377 "is_configured": true, 00:15:19.377 "data_offset": 0, 00:15:19.377 "data_size": 65536 00:15:19.377 } 00:15:19.377 ] 00:15:19.377 }' 00:15:19.377 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.377 06:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.636 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.636 06:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:19.895 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:19.895 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:20.153 [2024-07-23 06:28:32.485250] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.153 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:20.153 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:20.154 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:20.154 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:20.154 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:20.154 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:20.154 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.154 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.154 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.154 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.154 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:15:20.154 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.412 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.412 "name": "Existed_Raid", 00:15:20.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.412 "strip_size_kb": 64, 00:15:20.412 "state": "configuring", 00:15:20.412 "raid_level": "raid0", 00:15:20.412 "superblock": false, 00:15:20.412 "num_base_bdevs": 4, 00:15:20.412 "num_base_bdevs_discovered": 3, 00:15:20.412 "num_base_bdevs_operational": 4, 00:15:20.412 "base_bdevs_list": [ 00:15:20.412 { 00:15:20.412 "name": null, 00:15:20.412 "uuid": "c5785bd7-48bc-11ef-a06c-59ddad71024c", 00:15:20.412 "is_configured": false, 00:15:20.412 "data_offset": 0, 00:15:20.412 "data_size": 65536 00:15:20.412 }, 00:15:20.412 { 00:15:20.412 "name": "BaseBdev2", 00:15:20.412 "uuid": "c2ea0134-48bc-11ef-a06c-59ddad71024c", 00:15:20.412 "is_configured": true, 00:15:20.412 "data_offset": 0, 00:15:20.412 "data_size": 65536 00:15:20.412 }, 00:15:20.412 { 00:15:20.412 "name": "BaseBdev3", 00:15:20.412 "uuid": "c3606d82-48bc-11ef-a06c-59ddad71024c", 00:15:20.412 "is_configured": true, 00:15:20.412 "data_offset": 0, 00:15:20.412 "data_size": 65536 00:15:20.412 }, 00:15:20.412 { 00:15:20.412 "name": "BaseBdev4", 00:15:20.412 "uuid": "c3dbb900-48bc-11ef-a06c-59ddad71024c", 00:15:20.412 "is_configured": true, 00:15:20.412 "data_offset": 0, 00:15:20.412 "data_size": 65536 00:15:20.412 } 00:15:20.412 ] 00:15:20.412 }' 00:15:20.412 06:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:20.412 06:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.672 06:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.672 06:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:20.930 06:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:20.930 06:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.930 06:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:21.187 06:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c5785bd7-48bc-11ef-a06c-59ddad71024c 00:15:21.445 [2024-07-23 06:28:33.809523] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:21.445 [2024-07-23 06:28:33.809550] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x29176ea34f00 00:15:21.445 [2024-07-23 06:28:33.809555] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:21.445 [2024-07-23 06:28:33.809578] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x29176ea97e20 00:15:21.445 [2024-07-23 06:28:33.809648] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x29176ea34f00 00:15:21.445 [2024-07-23 06:28:33.809653] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x29176ea34f00 00:15:21.445 [2024-07-23 
06:28:33.809685] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.445 NewBaseBdev 00:15:21.445 06:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:21.445 06:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:21.445 06:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:21.445 06:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:21.445 06:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:21.445 06:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:21.445 06:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:21.703 06:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:21.961 [ 00:15:21.961 { 00:15:21.961 "name": "NewBaseBdev", 00:15:21.961 "aliases": [ 00:15:21.961 "c5785bd7-48bc-11ef-a06c-59ddad71024c" 00:15:21.961 ], 00:15:21.961 "product_name": "Malloc disk", 00:15:21.961 "block_size": 512, 00:15:21.961 "num_blocks": 65536, 00:15:21.961 "uuid": "c5785bd7-48bc-11ef-a06c-59ddad71024c", 00:15:21.961 "assigned_rate_limits": { 00:15:21.961 "rw_ios_per_sec": 0, 00:15:21.961 "rw_mbytes_per_sec": 0, 00:15:21.961 "r_mbytes_per_sec": 0, 00:15:21.961 "w_mbytes_per_sec": 0 00:15:21.961 }, 00:15:21.961 "claimed": true, 00:15:21.961 "claim_type": "exclusive_write", 00:15:21.961 "zoned": false, 00:15:21.961 "supported_io_types": { 00:15:21.961 "read": true, 00:15:21.961 "write": true, 00:15:21.961 "unmap": true, 00:15:21.961 "flush": true, 00:15:21.961 "reset": true, 00:15:21.961 "nvme_admin": false, 00:15:21.961 "nvme_io": false, 00:15:21.961 "nvme_io_md": false, 00:15:21.961 "write_zeroes": true, 00:15:21.961 "zcopy": true, 00:15:21.961 "get_zone_info": false, 00:15:21.961 "zone_management": false, 00:15:21.961 "zone_append": false, 00:15:21.961 "compare": false, 00:15:21.961 "compare_and_write": false, 00:15:21.961 "abort": true, 00:15:21.961 "seek_hole": false, 00:15:21.961 "seek_data": false, 00:15:21.961 "copy": true, 00:15:21.961 "nvme_iov_md": false 00:15:21.961 }, 00:15:21.961 "memory_domains": [ 00:15:21.961 { 00:15:21.961 "dma_device_id": "system", 00:15:21.961 "dma_device_type": 1 00:15:21.961 }, 00:15:21.961 { 00:15:21.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.961 "dma_device_type": 2 00:15:21.961 } 00:15:21.961 ], 00:15:21.961 "driver_specific": {} 00:15:21.961 } 00:15:21.961 ] 00:15:21.961 06:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:21.961 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:21.961 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:21.961 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:21.961 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:21.961 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:21.961 
06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:21.961 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.961 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.961 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.961 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.961 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.961 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.219 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.219 "name": "Existed_Raid", 00:15:22.219 "uuid": "c92e8602-48bc-11ef-a06c-59ddad71024c", 00:15:22.219 "strip_size_kb": 64, 00:15:22.219 "state": "online", 00:15:22.219 "raid_level": "raid0", 00:15:22.219 "superblock": false, 00:15:22.219 "num_base_bdevs": 4, 00:15:22.219 "num_base_bdevs_discovered": 4, 00:15:22.219 "num_base_bdevs_operational": 4, 00:15:22.219 "base_bdevs_list": [ 00:15:22.219 { 00:15:22.219 "name": "NewBaseBdev", 00:15:22.219 "uuid": "c5785bd7-48bc-11ef-a06c-59ddad71024c", 00:15:22.219 "is_configured": true, 00:15:22.219 "data_offset": 0, 00:15:22.219 "data_size": 65536 00:15:22.219 }, 00:15:22.219 { 00:15:22.219 "name": "BaseBdev2", 00:15:22.219 "uuid": "c2ea0134-48bc-11ef-a06c-59ddad71024c", 00:15:22.219 "is_configured": true, 00:15:22.219 "data_offset": 0, 00:15:22.219 "data_size": 65536 00:15:22.219 }, 00:15:22.219 { 00:15:22.219 "name": "BaseBdev3", 00:15:22.219 "uuid": "c3606d82-48bc-11ef-a06c-59ddad71024c", 00:15:22.219 "is_configured": true, 00:15:22.219 "data_offset": 0, 00:15:22.219 "data_size": 65536 00:15:22.219 }, 00:15:22.219 { 00:15:22.219 "name": "BaseBdev4", 00:15:22.219 "uuid": "c3dbb900-48bc-11ef-a06c-59ddad71024c", 00:15:22.219 "is_configured": true, 00:15:22.219 "data_offset": 0, 00:15:22.219 "data_size": 65536 00:15:22.219 } 00:15:22.219 ] 00:15:22.219 }' 00:15:22.219 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.219 06:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.476 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:22.476 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:22.476 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:22.476 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:22.476 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:22.476 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:22.476 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:22.476 06:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:22.735 [2024-07-23 06:28:35.149505] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:22.735 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:22.735 "name": "Existed_Raid", 00:15:22.735 "aliases": [ 00:15:22.735 "c92e8602-48bc-11ef-a06c-59ddad71024c" 00:15:22.735 ], 00:15:22.735 "product_name": "Raid Volume", 00:15:22.735 "block_size": 512, 00:15:22.735 "num_blocks": 262144, 00:15:22.735 "uuid": "c92e8602-48bc-11ef-a06c-59ddad71024c", 00:15:22.735 "assigned_rate_limits": { 00:15:22.735 "rw_ios_per_sec": 0, 00:15:22.735 "rw_mbytes_per_sec": 0, 00:15:22.735 "r_mbytes_per_sec": 0, 00:15:22.735 "w_mbytes_per_sec": 0 00:15:22.735 }, 00:15:22.735 "claimed": false, 00:15:22.735 "zoned": false, 00:15:22.735 "supported_io_types": { 00:15:22.735 "read": true, 00:15:22.735 "write": true, 00:15:22.735 "unmap": true, 00:15:22.735 "flush": true, 00:15:22.735 "reset": true, 00:15:22.735 "nvme_admin": false, 00:15:22.735 "nvme_io": false, 00:15:22.735 "nvme_io_md": false, 00:15:22.735 "write_zeroes": true, 00:15:22.735 "zcopy": false, 00:15:22.735 "get_zone_info": false, 00:15:22.735 "zone_management": false, 00:15:22.735 "zone_append": false, 00:15:22.735 "compare": false, 00:15:22.735 "compare_and_write": false, 00:15:22.735 "abort": false, 00:15:22.735 "seek_hole": false, 00:15:22.735 "seek_data": false, 00:15:22.735 "copy": false, 00:15:22.735 "nvme_iov_md": false 00:15:22.735 }, 00:15:22.735 "memory_domains": [ 00:15:22.735 { 00:15:22.735 "dma_device_id": "system", 00:15:22.735 "dma_device_type": 1 00:15:22.735 }, 00:15:22.735 { 00:15:22.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.735 "dma_device_type": 2 00:15:22.735 }, 00:15:22.735 { 00:15:22.735 "dma_device_id": "system", 00:15:22.735 "dma_device_type": 1 00:15:22.735 }, 00:15:22.735 { 00:15:22.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.735 "dma_device_type": 2 00:15:22.735 }, 00:15:22.735 { 00:15:22.735 "dma_device_id": "system", 00:15:22.735 "dma_device_type": 1 00:15:22.735 }, 00:15:22.735 { 00:15:22.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.735 "dma_device_type": 2 00:15:22.735 }, 00:15:22.735 { 00:15:22.735 "dma_device_id": "system", 00:15:22.735 "dma_device_type": 1 00:15:22.735 }, 00:15:22.735 { 00:15:22.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.735 "dma_device_type": 2 00:15:22.735 } 00:15:22.735 ], 00:15:22.735 "driver_specific": { 00:15:22.735 "raid": { 00:15:22.735 "uuid": "c92e8602-48bc-11ef-a06c-59ddad71024c", 00:15:22.735 "strip_size_kb": 64, 00:15:22.735 "state": "online", 00:15:22.735 "raid_level": "raid0", 00:15:22.735 "superblock": false, 00:15:22.735 "num_base_bdevs": 4, 00:15:22.735 "num_base_bdevs_discovered": 4, 00:15:22.735 "num_base_bdevs_operational": 4, 00:15:22.735 "base_bdevs_list": [ 00:15:22.735 { 00:15:22.735 "name": "NewBaseBdev", 00:15:22.735 "uuid": "c5785bd7-48bc-11ef-a06c-59ddad71024c", 00:15:22.735 "is_configured": true, 00:15:22.735 "data_offset": 0, 00:15:22.735 "data_size": 65536 00:15:22.735 }, 00:15:22.735 { 00:15:22.735 "name": "BaseBdev2", 00:15:22.735 "uuid": "c2ea0134-48bc-11ef-a06c-59ddad71024c", 00:15:22.735 "is_configured": true, 00:15:22.735 "data_offset": 0, 00:15:22.735 "data_size": 65536 00:15:22.735 }, 00:15:22.735 { 00:15:22.735 "name": "BaseBdev3", 00:15:22.735 "uuid": "c3606d82-48bc-11ef-a06c-59ddad71024c", 00:15:22.735 "is_configured": true, 00:15:22.736 "data_offset": 0, 00:15:22.736 "data_size": 65536 00:15:22.736 }, 00:15:22.736 { 00:15:22.736 "name": "BaseBdev4", 00:15:22.736 "uuid": "c3dbb900-48bc-11ef-a06c-59ddad71024c", 00:15:22.736 
"is_configured": true, 00:15:22.736 "data_offset": 0, 00:15:22.736 "data_size": 65536 00:15:22.736 } 00:15:22.736 ] 00:15:22.736 } 00:15:22.736 } 00:15:22.736 }' 00:15:22.736 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:22.736 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:22.736 BaseBdev2 00:15:22.736 BaseBdev3 00:15:22.736 BaseBdev4' 00:15:22.736 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:22.736 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:22.736 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:22.994 "name": "NewBaseBdev", 00:15:22.994 "aliases": [ 00:15:22.994 "c5785bd7-48bc-11ef-a06c-59ddad71024c" 00:15:22.994 ], 00:15:22.994 "product_name": "Malloc disk", 00:15:22.994 "block_size": 512, 00:15:22.994 "num_blocks": 65536, 00:15:22.994 "uuid": "c5785bd7-48bc-11ef-a06c-59ddad71024c", 00:15:22.994 "assigned_rate_limits": { 00:15:22.994 "rw_ios_per_sec": 0, 00:15:22.994 "rw_mbytes_per_sec": 0, 00:15:22.994 "r_mbytes_per_sec": 0, 00:15:22.994 "w_mbytes_per_sec": 0 00:15:22.994 }, 00:15:22.994 "claimed": true, 00:15:22.994 "claim_type": "exclusive_write", 00:15:22.994 "zoned": false, 00:15:22.994 "supported_io_types": { 00:15:22.994 "read": true, 00:15:22.994 "write": true, 00:15:22.994 "unmap": true, 00:15:22.994 "flush": true, 00:15:22.994 "reset": true, 00:15:22.994 "nvme_admin": false, 00:15:22.994 "nvme_io": false, 00:15:22.994 "nvme_io_md": false, 00:15:22.994 "write_zeroes": true, 00:15:22.994 "zcopy": true, 00:15:22.994 "get_zone_info": false, 00:15:22.994 "zone_management": false, 00:15:22.994 "zone_append": false, 00:15:22.994 "compare": false, 00:15:22.994 "compare_and_write": false, 00:15:22.994 "abort": true, 00:15:22.994 "seek_hole": false, 00:15:22.994 "seek_data": false, 00:15:22.994 "copy": true, 00:15:22.994 "nvme_iov_md": false 00:15:22.994 }, 00:15:22.994 "memory_domains": [ 00:15:22.994 { 00:15:22.994 "dma_device_id": "system", 00:15:22.994 "dma_device_type": 1 00:15:22.994 }, 00:15:22.994 { 00:15:22.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.994 "dma_device_type": 2 00:15:22.994 } 00:15:22.994 ], 00:15:22.994 "driver_specific": {} 00:15:22.994 }' 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
[[ null == null ]] 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:22.994 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:23.253 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:23.253 "name": "BaseBdev2", 00:15:23.253 "aliases": [ 00:15:23.253 "c2ea0134-48bc-11ef-a06c-59ddad71024c" 00:15:23.253 ], 00:15:23.253 "product_name": "Malloc disk", 00:15:23.253 "block_size": 512, 00:15:23.253 "num_blocks": 65536, 00:15:23.253 "uuid": "c2ea0134-48bc-11ef-a06c-59ddad71024c", 00:15:23.253 "assigned_rate_limits": { 00:15:23.253 "rw_ios_per_sec": 0, 00:15:23.253 "rw_mbytes_per_sec": 0, 00:15:23.253 "r_mbytes_per_sec": 0, 00:15:23.253 "w_mbytes_per_sec": 0 00:15:23.253 }, 00:15:23.253 "claimed": true, 00:15:23.253 "claim_type": "exclusive_write", 00:15:23.253 "zoned": false, 00:15:23.253 "supported_io_types": { 00:15:23.253 "read": true, 00:15:23.253 "write": true, 00:15:23.253 "unmap": true, 00:15:23.253 "flush": true, 00:15:23.253 "reset": true, 00:15:23.253 "nvme_admin": false, 00:15:23.253 "nvme_io": false, 00:15:23.253 "nvme_io_md": false, 00:15:23.253 "write_zeroes": true, 00:15:23.253 "zcopy": true, 00:15:23.253 "get_zone_info": false, 00:15:23.253 "zone_management": false, 00:15:23.253 "zone_append": false, 00:15:23.253 "compare": false, 00:15:23.253 "compare_and_write": false, 00:15:23.253 "abort": true, 00:15:23.253 "seek_hole": false, 00:15:23.253 "seek_data": false, 00:15:23.253 "copy": true, 00:15:23.253 "nvme_iov_md": false 00:15:23.253 }, 00:15:23.253 "memory_domains": [ 00:15:23.253 { 00:15:23.253 "dma_device_id": "system", 00:15:23.253 "dma_device_type": 1 00:15:23.253 }, 00:15:23.253 { 00:15:23.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.253 "dma_device_type": 2 00:15:23.253 } 00:15:23.253 ], 00:15:23.253 "driver_specific": {} 00:15:23.253 }' 00:15:23.253 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:23.253 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:23.511 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:23.511 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:23.511 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:23.511 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:23.511 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:23.511 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:23.511 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:23.511 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:23.511 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:15:23.511 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:23.511 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:23.511 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:23.511 06:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:23.769 "name": "BaseBdev3", 00:15:23.769 "aliases": [ 00:15:23.769 "c3606d82-48bc-11ef-a06c-59ddad71024c" 00:15:23.769 ], 00:15:23.769 "product_name": "Malloc disk", 00:15:23.769 "block_size": 512, 00:15:23.769 "num_blocks": 65536, 00:15:23.769 "uuid": "c3606d82-48bc-11ef-a06c-59ddad71024c", 00:15:23.769 "assigned_rate_limits": { 00:15:23.769 "rw_ios_per_sec": 0, 00:15:23.769 "rw_mbytes_per_sec": 0, 00:15:23.769 "r_mbytes_per_sec": 0, 00:15:23.769 "w_mbytes_per_sec": 0 00:15:23.769 }, 00:15:23.769 "claimed": true, 00:15:23.769 "claim_type": "exclusive_write", 00:15:23.769 "zoned": false, 00:15:23.769 "supported_io_types": { 00:15:23.769 "read": true, 00:15:23.769 "write": true, 00:15:23.769 "unmap": true, 00:15:23.769 "flush": true, 00:15:23.769 "reset": true, 00:15:23.769 "nvme_admin": false, 00:15:23.769 "nvme_io": false, 00:15:23.769 "nvme_io_md": false, 00:15:23.769 "write_zeroes": true, 00:15:23.769 "zcopy": true, 00:15:23.769 "get_zone_info": false, 00:15:23.769 "zone_management": false, 00:15:23.769 "zone_append": false, 00:15:23.769 "compare": false, 00:15:23.769 "compare_and_write": false, 00:15:23.769 "abort": true, 00:15:23.769 "seek_hole": false, 00:15:23.769 "seek_data": false, 00:15:23.769 "copy": true, 00:15:23.769 "nvme_iov_md": false 00:15:23.769 }, 00:15:23.769 "memory_domains": [ 00:15:23.769 { 00:15:23.769 "dma_device_id": "system", 00:15:23.769 "dma_device_type": 1 00:15:23.769 }, 00:15:23.769 { 00:15:23.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.769 "dma_device_type": 2 00:15:23.769 } 00:15:23.769 ], 00:15:23.769 "driver_specific": {} 00:15:23.769 }' 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:23.769 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:24.027 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:24.027 "name": "BaseBdev4", 00:15:24.027 "aliases": [ 00:15:24.027 "c3dbb900-48bc-11ef-a06c-59ddad71024c" 00:15:24.027 ], 00:15:24.027 "product_name": "Malloc disk", 00:15:24.027 "block_size": 512, 00:15:24.027 "num_blocks": 65536, 00:15:24.027 "uuid": "c3dbb900-48bc-11ef-a06c-59ddad71024c", 00:15:24.027 "assigned_rate_limits": { 00:15:24.027 "rw_ios_per_sec": 0, 00:15:24.027 "rw_mbytes_per_sec": 0, 00:15:24.027 "r_mbytes_per_sec": 0, 00:15:24.027 "w_mbytes_per_sec": 0 00:15:24.027 }, 00:15:24.027 "claimed": true, 00:15:24.027 "claim_type": "exclusive_write", 00:15:24.027 "zoned": false, 00:15:24.027 "supported_io_types": { 00:15:24.027 "read": true, 00:15:24.027 "write": true, 00:15:24.027 "unmap": true, 00:15:24.027 "flush": true, 00:15:24.027 "reset": true, 00:15:24.027 "nvme_admin": false, 00:15:24.027 "nvme_io": false, 00:15:24.027 "nvme_io_md": false, 00:15:24.027 "write_zeroes": true, 00:15:24.027 "zcopy": true, 00:15:24.027 "get_zone_info": false, 00:15:24.027 "zone_management": false, 00:15:24.027 "zone_append": false, 00:15:24.027 "compare": false, 00:15:24.027 "compare_and_write": false, 00:15:24.027 "abort": true, 00:15:24.027 "seek_hole": false, 00:15:24.027 "seek_data": false, 00:15:24.027 "copy": true, 00:15:24.027 "nvme_iov_md": false 00:15:24.027 }, 00:15:24.027 "memory_domains": [ 00:15:24.028 { 00:15:24.028 "dma_device_id": "system", 00:15:24.028 "dma_device_type": 1 00:15:24.028 }, 00:15:24.028 { 00:15:24.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.028 "dma_device_type": 2 00:15:24.028 } 00:15:24.028 ], 00:15:24.028 "driver_specific": {} 00:15:24.028 }' 00:15:24.028 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:24.028 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:24.028 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:24.028 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:24.028 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:24.028 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:24.028 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:24.028 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:24.028 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:24.028 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:24.028 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:24.028 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:24.028 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:24.286 [2024-07-23 06:28:36.697518] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.286 [2024-07-23 06:28:36.697542] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.286 [2024-07-23 06:28:36.697580] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.286 [2024-07-23 06:28:36.697595] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.286 [2024-07-23 06:28:36.697599] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x29176ea34f00 name Existed_Raid, state offline 00:15:24.286 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 58423 00:15:24.286 06:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 58423 ']' 00:15:24.286 06:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 58423 00:15:24.286 06:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:24.286 06:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:24.286 06:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 58423 00:15:24.286 06:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:15:24.286 06:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:24.286 06:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:24.286 killing process with pid 58423 00:15:24.286 06:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58423' 00:15:24.286 06:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 58423 00:15:24.286 [2024-07-23 06:28:36.725649] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:24.286 06:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 58423 00:15:24.286 [2024-07-23 06:28:36.750313] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:24.592 00:15:24.592 real 0m27.056s 00:15:24.592 user 0m49.547s 00:15:24.592 sys 0m3.719s 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.592 ************************************ 00:15:24.592 END TEST raid_state_function_test 00:15:24.592 ************************************ 00:15:24.592 06:28:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:24.592 06:28:36 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:15:24.592 06:28:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:24.592 06:28:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.592 06:28:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:24.592 ************************************ 00:15:24.592 START TEST raid_state_function_test_sb 00:15:24.592 ************************************ 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 
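For reference, the teardown traced just above the START banner (bdev_raid_delete followed by killprocess) reduces to a couple of RPC and shell steps. A minimal sketch assuming the same rpc.py path and socket shown in the trace; the raid_pid variable name is illustrative:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # delete the raid bdev: the trace shows this driving it from online to offline
    # and then freeing the remaining base bdevs in destruct
    $rpc_py bdev_raid_delete Existed_Raid

    # stop the bdev_svc app that served the RPCs (pid 58423 in this run)
    kill -0 "$raid_pid" && kill "$raid_pid"
    wait "$raid_pid"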
00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=59238 00:15:24.592 Process raid pid: 59238 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 59238' 00:15:24.592 06:28:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 59238 /var/tmp/spdk-raid.sock 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 59238 ']' 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:24.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:24.592 06:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.592 [2024-07-23 06:28:37.003044] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:24.592 [2024-07-23 06:28:37.003310] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:25.159 EAL: TSC is not safe to use in SMP mode 00:15:25.159 EAL: TSC is not invariant 00:15:25.159 [2024-07-23 06:28:37.582557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.159 [2024-07-23 06:28:37.671637] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
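The superblock variant starts from the same scaffolding: a bare bdev_svc app is launched with the bdev_raid debug log flag, and the test blocks until its RPC socket answers before issuing any raid RPCs. Roughly, under the paths shown in the trace (the polling loop is a simplified stand-in for the waitforlisten helper, not a verbatim copy of it):

    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!   # 59238 in this run

    # wait until the app is up and answering RPCs on the unix socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods > /dev/null 2>&1; do
        sleep 0.1
    done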
00:15:25.159 [2024-07-23 06:28:37.673797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.159 [2024-07-23 06:28:37.674601] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.159 [2024-07-23 06:28:37.674616] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.724 06:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.724 06:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:25.724 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:25.982 [2024-07-23 06:28:38.314950] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.982 [2024-07-23 06:28:38.315025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.982 [2024-07-23 06:28:38.315031] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.982 [2024-07-23 06:28:38.315057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.982 [2024-07-23 06:28:38.315060] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.982 [2024-07-23 06:28:38.315068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.982 [2024-07-23 06:28:38.315071] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:25.982 [2024-07-23 06:28:38.315079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:25.982 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:25.982 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:25.982 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:25.982 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:25.982 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:25.982 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:25.982 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.982 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.982 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.982 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.982 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.982 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.239 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.239 "name": "Existed_Raid", 00:15:26.239 "uuid": 
"cbddfd76-48bc-11ef-a06c-59ddad71024c", 00:15:26.239 "strip_size_kb": 64, 00:15:26.239 "state": "configuring", 00:15:26.239 "raid_level": "raid0", 00:15:26.239 "superblock": true, 00:15:26.239 "num_base_bdevs": 4, 00:15:26.239 "num_base_bdevs_discovered": 0, 00:15:26.239 "num_base_bdevs_operational": 4, 00:15:26.239 "base_bdevs_list": [ 00:15:26.239 { 00:15:26.239 "name": "BaseBdev1", 00:15:26.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.239 "is_configured": false, 00:15:26.239 "data_offset": 0, 00:15:26.239 "data_size": 0 00:15:26.239 }, 00:15:26.239 { 00:15:26.239 "name": "BaseBdev2", 00:15:26.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.239 "is_configured": false, 00:15:26.239 "data_offset": 0, 00:15:26.239 "data_size": 0 00:15:26.239 }, 00:15:26.239 { 00:15:26.239 "name": "BaseBdev3", 00:15:26.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.239 "is_configured": false, 00:15:26.239 "data_offset": 0, 00:15:26.239 "data_size": 0 00:15:26.239 }, 00:15:26.239 { 00:15:26.239 "name": "BaseBdev4", 00:15:26.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.239 "is_configured": false, 00:15:26.239 "data_offset": 0, 00:15:26.239 "data_size": 0 00:15:26.239 } 00:15:26.240 ] 00:15:26.240 }' 00:15:26.240 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.240 06:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.497 06:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:26.754 [2024-07-23 06:28:39.154971] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.754 [2024-07-23 06:28:39.154995] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d1539a34500 name Existed_Raid, state configuring 00:15:26.754 06:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:27.056 [2024-07-23 06:28:39.443019] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.056 [2024-07-23 06:28:39.443067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.056 [2024-07-23 06:28:39.443073] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.056 [2024-07-23 06:28:39.443083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.056 [2024-07-23 06:28:39.443087] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:27.056 [2024-07-23 06:28:39.443094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:27.056 [2024-07-23 06:28:39.443098] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:27.056 [2024-07-23 06:28:39.443105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:27.056 06:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:27.319 [2024-07-23 06:28:39.736122] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:15:27.319 BaseBdev1 00:15:27.319 06:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:27.319 06:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:27.319 06:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:27.319 06:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:27.319 06:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:27.319 06:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:27.319 06:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:27.577 06:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:27.835 [ 00:15:27.835 { 00:15:27.835 "name": "BaseBdev1", 00:15:27.835 "aliases": [ 00:15:27.835 "ccb6af30-48bc-11ef-a06c-59ddad71024c" 00:15:27.835 ], 00:15:27.835 "product_name": "Malloc disk", 00:15:27.835 "block_size": 512, 00:15:27.835 "num_blocks": 65536, 00:15:27.835 "uuid": "ccb6af30-48bc-11ef-a06c-59ddad71024c", 00:15:27.835 "assigned_rate_limits": { 00:15:27.835 "rw_ios_per_sec": 0, 00:15:27.835 "rw_mbytes_per_sec": 0, 00:15:27.835 "r_mbytes_per_sec": 0, 00:15:27.835 "w_mbytes_per_sec": 0 00:15:27.835 }, 00:15:27.835 "claimed": true, 00:15:27.835 "claim_type": "exclusive_write", 00:15:27.835 "zoned": false, 00:15:27.835 "supported_io_types": { 00:15:27.835 "read": true, 00:15:27.835 "write": true, 00:15:27.835 "unmap": true, 00:15:27.835 "flush": true, 00:15:27.835 "reset": true, 00:15:27.835 "nvme_admin": false, 00:15:27.835 "nvme_io": false, 00:15:27.835 "nvme_io_md": false, 00:15:27.835 "write_zeroes": true, 00:15:27.835 "zcopy": true, 00:15:27.835 "get_zone_info": false, 00:15:27.835 "zone_management": false, 00:15:27.835 "zone_append": false, 00:15:27.835 "compare": false, 00:15:27.835 "compare_and_write": false, 00:15:27.835 "abort": true, 00:15:27.835 "seek_hole": false, 00:15:27.835 "seek_data": false, 00:15:27.835 "copy": true, 00:15:27.835 "nvme_iov_md": false 00:15:27.835 }, 00:15:27.835 "memory_domains": [ 00:15:27.835 { 00:15:27.835 "dma_device_id": "system", 00:15:27.835 "dma_device_type": 1 00:15:27.835 }, 00:15:27.835 { 00:15:27.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.835 "dma_device_type": 2 00:15:27.835 } 00:15:27.835 ], 00:15:27.835 "driver_specific": {} 00:15:27.835 } 00:15:27.835 ] 00:15:27.835 06:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:27.835 06:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:27.835 06:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:27.835 06:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:27.835 06:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:27.835 06:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:27.835 06:28:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:27.835 06:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.835 06:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.835 06:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.835 06:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.835 06:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.835 06:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.093 06:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:28.093 "name": "Existed_Raid", 00:15:28.093 "uuid": "cc8a1ebe-48bc-11ef-a06c-59ddad71024c", 00:15:28.093 "strip_size_kb": 64, 00:15:28.093 "state": "configuring", 00:15:28.093 "raid_level": "raid0", 00:15:28.093 "superblock": true, 00:15:28.093 "num_base_bdevs": 4, 00:15:28.093 "num_base_bdevs_discovered": 1, 00:15:28.093 "num_base_bdevs_operational": 4, 00:15:28.093 "base_bdevs_list": [ 00:15:28.093 { 00:15:28.093 "name": "BaseBdev1", 00:15:28.093 "uuid": "ccb6af30-48bc-11ef-a06c-59ddad71024c", 00:15:28.093 "is_configured": true, 00:15:28.093 "data_offset": 2048, 00:15:28.093 "data_size": 63488 00:15:28.093 }, 00:15:28.093 { 00:15:28.093 "name": "BaseBdev2", 00:15:28.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.093 "is_configured": false, 00:15:28.093 "data_offset": 0, 00:15:28.093 "data_size": 0 00:15:28.093 }, 00:15:28.093 { 00:15:28.093 "name": "BaseBdev3", 00:15:28.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.093 "is_configured": false, 00:15:28.093 "data_offset": 0, 00:15:28.093 "data_size": 0 00:15:28.093 }, 00:15:28.093 { 00:15:28.093 "name": "BaseBdev4", 00:15:28.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.093 "is_configured": false, 00:15:28.093 "data_offset": 0, 00:15:28.093 "data_size": 0 00:15:28.093 } 00:15:28.093 ] 00:15:28.093 }' 00:15:28.093 06:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:28.093 06:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.658 06:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:28.915 [2024-07-23 06:28:41.195129] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:28.915 [2024-07-23 06:28:41.195166] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d1539a34500 name Existed_Raid, state configuring 00:15:28.915 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:29.173 [2024-07-23 06:28:41.471190] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.173 [2024-07-23 06:28:41.472103] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.173 [2024-07-23 06:28:41.472203] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.173 [2024-07-23 06:28:41.472208] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:29.173 [2024-07-23 06:28:41.472232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:29.173 [2024-07-23 06:28:41.472236] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:29.173 [2024-07-23 06:28:41.472242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:29.173 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:29.173 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:29.173 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:29.173 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:29.173 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:29.173 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:29.173 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:29.173 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:29.173 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.173 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.173 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.173 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.173 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.173 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.431 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.431 "name": "Existed_Raid", 00:15:29.431 "uuid": "cdbf97ee-48bc-11ef-a06c-59ddad71024c", 00:15:29.431 "strip_size_kb": 64, 00:15:29.431 "state": "configuring", 00:15:29.431 "raid_level": "raid0", 00:15:29.431 "superblock": true, 00:15:29.431 "num_base_bdevs": 4, 00:15:29.431 "num_base_bdevs_discovered": 1, 00:15:29.431 "num_base_bdevs_operational": 4, 00:15:29.431 "base_bdevs_list": [ 00:15:29.431 { 00:15:29.431 "name": "BaseBdev1", 00:15:29.431 "uuid": "ccb6af30-48bc-11ef-a06c-59ddad71024c", 00:15:29.432 "is_configured": true, 00:15:29.432 "data_offset": 2048, 00:15:29.432 "data_size": 63488 00:15:29.432 }, 00:15:29.432 { 00:15:29.432 "name": "BaseBdev2", 00:15:29.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.432 "is_configured": false, 00:15:29.432 "data_offset": 0, 00:15:29.432 "data_size": 0 00:15:29.432 }, 00:15:29.432 { 00:15:29.432 "name": "BaseBdev3", 00:15:29.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.432 "is_configured": false, 00:15:29.432 "data_offset": 0, 00:15:29.432 "data_size": 0 00:15:29.432 }, 00:15:29.432 { 00:15:29.432 "name": "BaseBdev4", 
00:15:29.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.432 "is_configured": false, 00:15:29.432 "data_offset": 0, 00:15:29.432 "data_size": 0 00:15:29.432 } 00:15:29.432 ] 00:15:29.432 }' 00:15:29.432 06:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.432 06:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.745 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:30.003 [2024-07-23 06:28:42.391365] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.003 BaseBdev2 00:15:30.003 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:30.003 06:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:30.003 06:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:30.003 06:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:30.003 06:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:30.003 06:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:30.003 06:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.262 06:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:30.519 [ 00:15:30.520 { 00:15:30.520 "name": "BaseBdev2", 00:15:30.520 "aliases": [ 00:15:30.520 "ce4bfbac-48bc-11ef-a06c-59ddad71024c" 00:15:30.520 ], 00:15:30.520 "product_name": "Malloc disk", 00:15:30.520 "block_size": 512, 00:15:30.520 "num_blocks": 65536, 00:15:30.520 "uuid": "ce4bfbac-48bc-11ef-a06c-59ddad71024c", 00:15:30.520 "assigned_rate_limits": { 00:15:30.520 "rw_ios_per_sec": 0, 00:15:30.520 "rw_mbytes_per_sec": 0, 00:15:30.520 "r_mbytes_per_sec": 0, 00:15:30.520 "w_mbytes_per_sec": 0 00:15:30.520 }, 00:15:30.520 "claimed": true, 00:15:30.520 "claim_type": "exclusive_write", 00:15:30.520 "zoned": false, 00:15:30.520 "supported_io_types": { 00:15:30.520 "read": true, 00:15:30.520 "write": true, 00:15:30.520 "unmap": true, 00:15:30.520 "flush": true, 00:15:30.520 "reset": true, 00:15:30.520 "nvme_admin": false, 00:15:30.520 "nvme_io": false, 00:15:30.520 "nvme_io_md": false, 00:15:30.520 "write_zeroes": true, 00:15:30.520 "zcopy": true, 00:15:30.520 "get_zone_info": false, 00:15:30.520 "zone_management": false, 00:15:30.520 "zone_append": false, 00:15:30.520 "compare": false, 00:15:30.520 "compare_and_write": false, 00:15:30.520 "abort": true, 00:15:30.520 "seek_hole": false, 00:15:30.520 "seek_data": false, 00:15:30.520 "copy": true, 00:15:30.520 "nvme_iov_md": false 00:15:30.520 }, 00:15:30.520 "memory_domains": [ 00:15:30.520 { 00:15:30.520 "dma_device_id": "system", 00:15:30.520 "dma_device_type": 1 00:15:30.520 }, 00:15:30.520 { 00:15:30.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.520 "dma_device_type": 2 00:15:30.520 } 00:15:30.520 ], 00:15:30.520 "driver_specific": {} 00:15:30.520 } 00:15:30.520 ] 00:15:30.520 06:28:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:30.520 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:30.520 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:30.520 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:30.520 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:30.520 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:30.520 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:30.520 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:30.520 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:30.520 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:30.520 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:30.520 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:30.520 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:30.520 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.520 06:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.779 06:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:30.779 "name": "Existed_Raid", 00:15:30.779 "uuid": "cdbf97ee-48bc-11ef-a06c-59ddad71024c", 00:15:30.779 "strip_size_kb": 64, 00:15:30.779 "state": "configuring", 00:15:30.779 "raid_level": "raid0", 00:15:30.779 "superblock": true, 00:15:30.779 "num_base_bdevs": 4, 00:15:30.779 "num_base_bdevs_discovered": 2, 00:15:30.779 "num_base_bdevs_operational": 4, 00:15:30.779 "base_bdevs_list": [ 00:15:30.779 { 00:15:30.779 "name": "BaseBdev1", 00:15:30.779 "uuid": "ccb6af30-48bc-11ef-a06c-59ddad71024c", 00:15:30.779 "is_configured": true, 00:15:30.779 "data_offset": 2048, 00:15:30.779 "data_size": 63488 00:15:30.779 }, 00:15:30.779 { 00:15:30.779 "name": "BaseBdev2", 00:15:30.779 "uuid": "ce4bfbac-48bc-11ef-a06c-59ddad71024c", 00:15:30.779 "is_configured": true, 00:15:30.779 "data_offset": 2048, 00:15:30.779 "data_size": 63488 00:15:30.779 }, 00:15:30.779 { 00:15:30.779 "name": "BaseBdev3", 00:15:30.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.779 "is_configured": false, 00:15:30.779 "data_offset": 0, 00:15:30.779 "data_size": 0 00:15:30.779 }, 00:15:30.779 { 00:15:30.779 "name": "BaseBdev4", 00:15:30.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.779 "is_configured": false, 00:15:30.779 "data_offset": 0, 00:15:30.779 "data_size": 0 00:15:30.779 } 00:15:30.779 ] 00:15:30.779 }' 00:15:30.779 06:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:30.779 06:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.036 06:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:31.293 [2024-07-23 06:28:43.699410] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.293 BaseBdev3 00:15:31.293 06:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:31.293 06:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:31.293 06:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:31.293 06:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:31.293 06:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:31.293 06:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:31.293 06:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.551 06:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:31.809 [ 00:15:31.809 { 00:15:31.809 "name": "BaseBdev3", 00:15:31.809 "aliases": [ 00:15:31.809 "cf1393bb-48bc-11ef-a06c-59ddad71024c" 00:15:31.809 ], 00:15:31.809 "product_name": "Malloc disk", 00:15:31.809 "block_size": 512, 00:15:31.809 "num_blocks": 65536, 00:15:31.809 "uuid": "cf1393bb-48bc-11ef-a06c-59ddad71024c", 00:15:31.809 "assigned_rate_limits": { 00:15:31.809 "rw_ios_per_sec": 0, 00:15:31.809 "rw_mbytes_per_sec": 0, 00:15:31.809 "r_mbytes_per_sec": 0, 00:15:31.809 "w_mbytes_per_sec": 0 00:15:31.809 }, 00:15:31.809 "claimed": true, 00:15:31.809 "claim_type": "exclusive_write", 00:15:31.809 "zoned": false, 00:15:31.809 "supported_io_types": { 00:15:31.809 "read": true, 00:15:31.809 "write": true, 00:15:31.809 "unmap": true, 00:15:31.809 "flush": true, 00:15:31.809 "reset": true, 00:15:31.809 "nvme_admin": false, 00:15:31.809 "nvme_io": false, 00:15:31.809 "nvme_io_md": false, 00:15:31.809 "write_zeroes": true, 00:15:31.809 "zcopy": true, 00:15:31.809 "get_zone_info": false, 00:15:31.809 "zone_management": false, 00:15:31.809 "zone_append": false, 00:15:31.809 "compare": false, 00:15:31.809 "compare_and_write": false, 00:15:31.809 "abort": true, 00:15:31.809 "seek_hole": false, 00:15:31.809 "seek_data": false, 00:15:31.809 "copy": true, 00:15:31.809 "nvme_iov_md": false 00:15:31.809 }, 00:15:31.809 "memory_domains": [ 00:15:31.809 { 00:15:31.809 "dma_device_id": "system", 00:15:31.809 "dma_device_type": 1 00:15:31.809 }, 00:15:31.809 { 00:15:31.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.809 "dma_device_type": 2 00:15:31.809 } 00:15:31.809 ], 00:15:31.809 "driver_specific": {} 00:15:31.809 } 00:15:31.809 ] 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.809 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.067 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:32.067 "name": "Existed_Raid", 00:15:32.067 "uuid": "cdbf97ee-48bc-11ef-a06c-59ddad71024c", 00:15:32.067 "strip_size_kb": 64, 00:15:32.067 "state": "configuring", 00:15:32.067 "raid_level": "raid0", 00:15:32.067 "superblock": true, 00:15:32.067 "num_base_bdevs": 4, 00:15:32.067 "num_base_bdevs_discovered": 3, 00:15:32.067 "num_base_bdevs_operational": 4, 00:15:32.067 "base_bdevs_list": [ 00:15:32.067 { 00:15:32.067 "name": "BaseBdev1", 00:15:32.067 "uuid": "ccb6af30-48bc-11ef-a06c-59ddad71024c", 00:15:32.067 "is_configured": true, 00:15:32.067 "data_offset": 2048, 00:15:32.067 "data_size": 63488 00:15:32.067 }, 00:15:32.067 { 00:15:32.067 "name": "BaseBdev2", 00:15:32.067 "uuid": "ce4bfbac-48bc-11ef-a06c-59ddad71024c", 00:15:32.067 "is_configured": true, 00:15:32.067 "data_offset": 2048, 00:15:32.067 "data_size": 63488 00:15:32.067 }, 00:15:32.067 { 00:15:32.067 "name": "BaseBdev3", 00:15:32.067 "uuid": "cf1393bb-48bc-11ef-a06c-59ddad71024c", 00:15:32.067 "is_configured": true, 00:15:32.067 "data_offset": 2048, 00:15:32.067 "data_size": 63488 00:15:32.067 }, 00:15:32.067 { 00:15:32.067 "name": "BaseBdev4", 00:15:32.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.067 "is_configured": false, 00:15:32.067 "data_offset": 0, 00:15:32.067 "data_size": 0 00:15:32.067 } 00:15:32.067 ] 00:15:32.067 }' 00:15:32.067 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:32.067 06:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.326 06:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:32.584 [2024-07-23 06:28:45.051510] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:32.584 [2024-07-23 06:28:45.051585] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d1539a34a00 00:15:32.584 [2024-07-23 06:28:45.051592] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:32.584 [2024-07-23 
06:28:45.051615] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d1539a97e20 00:15:32.584 [2024-07-23 06:28:45.051672] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3d1539a34a00 00:15:32.584 [2024-07-23 06:28:45.051677] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3d1539a34a00 00:15:32.584 [2024-07-23 06:28:45.051699] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.584 BaseBdev4 00:15:32.584 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:32.584 06:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:32.584 06:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:32.584 06:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:32.584 06:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:32.584 06:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:32.585 06:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:33.149 [ 00:15:33.149 { 00:15:33.149 "name": "BaseBdev4", 00:15:33.149 "aliases": [ 00:15:33.149 "cfe1e352-48bc-11ef-a06c-59ddad71024c" 00:15:33.149 ], 00:15:33.149 "product_name": "Malloc disk", 00:15:33.149 "block_size": 512, 00:15:33.149 "num_blocks": 65536, 00:15:33.149 "uuid": "cfe1e352-48bc-11ef-a06c-59ddad71024c", 00:15:33.149 "assigned_rate_limits": { 00:15:33.149 "rw_ios_per_sec": 0, 00:15:33.149 "rw_mbytes_per_sec": 0, 00:15:33.149 "r_mbytes_per_sec": 0, 00:15:33.149 "w_mbytes_per_sec": 0 00:15:33.149 }, 00:15:33.149 "claimed": true, 00:15:33.149 "claim_type": "exclusive_write", 00:15:33.149 "zoned": false, 00:15:33.149 "supported_io_types": { 00:15:33.149 "read": true, 00:15:33.149 "write": true, 00:15:33.149 "unmap": true, 00:15:33.149 "flush": true, 00:15:33.149 "reset": true, 00:15:33.149 "nvme_admin": false, 00:15:33.149 "nvme_io": false, 00:15:33.149 "nvme_io_md": false, 00:15:33.149 "write_zeroes": true, 00:15:33.149 "zcopy": true, 00:15:33.149 "get_zone_info": false, 00:15:33.149 "zone_management": false, 00:15:33.149 "zone_append": false, 00:15:33.149 "compare": false, 00:15:33.149 "compare_and_write": false, 00:15:33.149 "abort": true, 00:15:33.149 "seek_hole": false, 00:15:33.149 "seek_data": false, 00:15:33.149 "copy": true, 00:15:33.149 "nvme_iov_md": false 00:15:33.149 }, 00:15:33.149 "memory_domains": [ 00:15:33.149 { 00:15:33.149 "dma_device_id": "system", 00:15:33.149 "dma_device_type": 1 00:15:33.149 }, 00:15:33.149 { 00:15:33.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.149 "dma_device_type": 2 00:15:33.149 } 00:15:33.149 ], 00:15:33.149 "driver_specific": {} 00:15:33.149 } 00:15:33.149 ] 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.149 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.714 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.714 "name": "Existed_Raid", 00:15:33.714 "uuid": "cdbf97ee-48bc-11ef-a06c-59ddad71024c", 00:15:33.714 "strip_size_kb": 64, 00:15:33.714 "state": "online", 00:15:33.714 "raid_level": "raid0", 00:15:33.714 "superblock": true, 00:15:33.714 "num_base_bdevs": 4, 00:15:33.714 "num_base_bdevs_discovered": 4, 00:15:33.714 "num_base_bdevs_operational": 4, 00:15:33.714 "base_bdevs_list": [ 00:15:33.714 { 00:15:33.714 "name": "BaseBdev1", 00:15:33.714 "uuid": "ccb6af30-48bc-11ef-a06c-59ddad71024c", 00:15:33.714 "is_configured": true, 00:15:33.714 "data_offset": 2048, 00:15:33.714 "data_size": 63488 00:15:33.714 }, 00:15:33.714 { 00:15:33.714 "name": "BaseBdev2", 00:15:33.714 "uuid": "ce4bfbac-48bc-11ef-a06c-59ddad71024c", 00:15:33.714 "is_configured": true, 00:15:33.714 "data_offset": 2048, 00:15:33.714 "data_size": 63488 00:15:33.714 }, 00:15:33.714 { 00:15:33.714 "name": "BaseBdev3", 00:15:33.714 "uuid": "cf1393bb-48bc-11ef-a06c-59ddad71024c", 00:15:33.714 "is_configured": true, 00:15:33.714 "data_offset": 2048, 00:15:33.714 "data_size": 63488 00:15:33.714 }, 00:15:33.714 { 00:15:33.714 "name": "BaseBdev4", 00:15:33.714 "uuid": "cfe1e352-48bc-11ef-a06c-59ddad71024c", 00:15:33.714 "is_configured": true, 00:15:33.714 "data_offset": 2048, 00:15:33.714 "data_size": 63488 00:15:33.714 } 00:15:33.714 ] 00:15:33.714 }' 00:15:33.714 06:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.714 06:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.972 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:33.972 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:33.972 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:15:33.972 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:33.972 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:33.972 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:33.972 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:33.972 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:34.231 [2024-07-23 06:28:46.559557] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.231 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:34.231 "name": "Existed_Raid", 00:15:34.231 "aliases": [ 00:15:34.231 "cdbf97ee-48bc-11ef-a06c-59ddad71024c" 00:15:34.231 ], 00:15:34.231 "product_name": "Raid Volume", 00:15:34.231 "block_size": 512, 00:15:34.231 "num_blocks": 253952, 00:15:34.231 "uuid": "cdbf97ee-48bc-11ef-a06c-59ddad71024c", 00:15:34.231 "assigned_rate_limits": { 00:15:34.231 "rw_ios_per_sec": 0, 00:15:34.231 "rw_mbytes_per_sec": 0, 00:15:34.231 "r_mbytes_per_sec": 0, 00:15:34.231 "w_mbytes_per_sec": 0 00:15:34.231 }, 00:15:34.231 "claimed": false, 00:15:34.231 "zoned": false, 00:15:34.231 "supported_io_types": { 00:15:34.231 "read": true, 00:15:34.231 "write": true, 00:15:34.231 "unmap": true, 00:15:34.231 "flush": true, 00:15:34.231 "reset": true, 00:15:34.231 "nvme_admin": false, 00:15:34.231 "nvme_io": false, 00:15:34.231 "nvme_io_md": false, 00:15:34.231 "write_zeroes": true, 00:15:34.231 "zcopy": false, 00:15:34.231 "get_zone_info": false, 00:15:34.231 "zone_management": false, 00:15:34.231 "zone_append": false, 00:15:34.231 "compare": false, 00:15:34.231 "compare_and_write": false, 00:15:34.231 "abort": false, 00:15:34.231 "seek_hole": false, 00:15:34.231 "seek_data": false, 00:15:34.231 "copy": false, 00:15:34.231 "nvme_iov_md": false 00:15:34.231 }, 00:15:34.231 "memory_domains": [ 00:15:34.231 { 00:15:34.231 "dma_device_id": "system", 00:15:34.231 "dma_device_type": 1 00:15:34.231 }, 00:15:34.231 { 00:15:34.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.231 "dma_device_type": 2 00:15:34.231 }, 00:15:34.231 { 00:15:34.231 "dma_device_id": "system", 00:15:34.231 "dma_device_type": 1 00:15:34.231 }, 00:15:34.231 { 00:15:34.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.232 "dma_device_type": 2 00:15:34.232 }, 00:15:34.232 { 00:15:34.232 "dma_device_id": "system", 00:15:34.232 "dma_device_type": 1 00:15:34.232 }, 00:15:34.232 { 00:15:34.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.232 "dma_device_type": 2 00:15:34.232 }, 00:15:34.232 { 00:15:34.232 "dma_device_id": "system", 00:15:34.232 "dma_device_type": 1 00:15:34.232 }, 00:15:34.232 { 00:15:34.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.232 "dma_device_type": 2 00:15:34.232 } 00:15:34.232 ], 00:15:34.232 "driver_specific": { 00:15:34.232 "raid": { 00:15:34.232 "uuid": "cdbf97ee-48bc-11ef-a06c-59ddad71024c", 00:15:34.232 "strip_size_kb": 64, 00:15:34.232 "state": "online", 00:15:34.232 "raid_level": "raid0", 00:15:34.232 "superblock": true, 00:15:34.232 "num_base_bdevs": 4, 00:15:34.232 "num_base_bdevs_discovered": 4, 00:15:34.232 "num_base_bdevs_operational": 4, 00:15:34.232 "base_bdevs_list": [ 00:15:34.232 { 00:15:34.232 "name": "BaseBdev1", 00:15:34.232 "uuid": 
"ccb6af30-48bc-11ef-a06c-59ddad71024c", 00:15:34.232 "is_configured": true, 00:15:34.232 "data_offset": 2048, 00:15:34.232 "data_size": 63488 00:15:34.232 }, 00:15:34.232 { 00:15:34.232 "name": "BaseBdev2", 00:15:34.232 "uuid": "ce4bfbac-48bc-11ef-a06c-59ddad71024c", 00:15:34.232 "is_configured": true, 00:15:34.232 "data_offset": 2048, 00:15:34.232 "data_size": 63488 00:15:34.232 }, 00:15:34.232 { 00:15:34.232 "name": "BaseBdev3", 00:15:34.232 "uuid": "cf1393bb-48bc-11ef-a06c-59ddad71024c", 00:15:34.232 "is_configured": true, 00:15:34.232 "data_offset": 2048, 00:15:34.232 "data_size": 63488 00:15:34.232 }, 00:15:34.232 { 00:15:34.232 "name": "BaseBdev4", 00:15:34.232 "uuid": "cfe1e352-48bc-11ef-a06c-59ddad71024c", 00:15:34.232 "is_configured": true, 00:15:34.232 "data_offset": 2048, 00:15:34.232 "data_size": 63488 00:15:34.232 } 00:15:34.232 ] 00:15:34.232 } 00:15:34.232 } 00:15:34.232 }' 00:15:34.232 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.232 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:34.232 BaseBdev2 00:15:34.232 BaseBdev3 00:15:34.232 BaseBdev4' 00:15:34.232 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.232 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:34.232 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:34.491 "name": "BaseBdev1", 00:15:34.491 "aliases": [ 00:15:34.491 "ccb6af30-48bc-11ef-a06c-59ddad71024c" 00:15:34.491 ], 00:15:34.491 "product_name": "Malloc disk", 00:15:34.491 "block_size": 512, 00:15:34.491 "num_blocks": 65536, 00:15:34.491 "uuid": "ccb6af30-48bc-11ef-a06c-59ddad71024c", 00:15:34.491 "assigned_rate_limits": { 00:15:34.491 "rw_ios_per_sec": 0, 00:15:34.491 "rw_mbytes_per_sec": 0, 00:15:34.491 "r_mbytes_per_sec": 0, 00:15:34.491 "w_mbytes_per_sec": 0 00:15:34.491 }, 00:15:34.491 "claimed": true, 00:15:34.491 "claim_type": "exclusive_write", 00:15:34.491 "zoned": false, 00:15:34.491 "supported_io_types": { 00:15:34.491 "read": true, 00:15:34.491 "write": true, 00:15:34.491 "unmap": true, 00:15:34.491 "flush": true, 00:15:34.491 "reset": true, 00:15:34.491 "nvme_admin": false, 00:15:34.491 "nvme_io": false, 00:15:34.491 "nvme_io_md": false, 00:15:34.491 "write_zeroes": true, 00:15:34.491 "zcopy": true, 00:15:34.491 "get_zone_info": false, 00:15:34.491 "zone_management": false, 00:15:34.491 "zone_append": false, 00:15:34.491 "compare": false, 00:15:34.491 "compare_and_write": false, 00:15:34.491 "abort": true, 00:15:34.491 "seek_hole": false, 00:15:34.491 "seek_data": false, 00:15:34.491 "copy": true, 00:15:34.491 "nvme_iov_md": false 00:15:34.491 }, 00:15:34.491 "memory_domains": [ 00:15:34.491 { 00:15:34.491 "dma_device_id": "system", 00:15:34.491 "dma_device_type": 1 00:15:34.491 }, 00:15:34.491 { 00:15:34.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.491 "dma_device_type": 2 00:15:34.491 } 00:15:34.491 ], 00:15:34.491 "driver_specific": {} 00:15:34.491 }' 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.491 06:28:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:34.491 06:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:34.749 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:34.749 "name": "BaseBdev2", 00:15:34.749 "aliases": [ 00:15:34.749 "ce4bfbac-48bc-11ef-a06c-59ddad71024c" 00:15:34.749 ], 00:15:34.749 "product_name": "Malloc disk", 00:15:34.749 "block_size": 512, 00:15:34.749 "num_blocks": 65536, 00:15:34.749 "uuid": "ce4bfbac-48bc-11ef-a06c-59ddad71024c", 00:15:34.749 "assigned_rate_limits": { 00:15:34.749 "rw_ios_per_sec": 0, 00:15:34.749 "rw_mbytes_per_sec": 0, 00:15:34.749 "r_mbytes_per_sec": 0, 00:15:34.749 "w_mbytes_per_sec": 0 00:15:34.749 }, 00:15:34.749 "claimed": true, 00:15:34.749 "claim_type": "exclusive_write", 00:15:34.749 "zoned": false, 00:15:34.749 "supported_io_types": { 00:15:34.749 "read": true, 00:15:34.749 "write": true, 00:15:34.749 "unmap": true, 00:15:34.749 "flush": true, 00:15:34.749 "reset": true, 00:15:34.749 "nvme_admin": false, 00:15:34.749 "nvme_io": false, 00:15:34.749 "nvme_io_md": false, 00:15:34.749 "write_zeroes": true, 00:15:34.749 "zcopy": true, 00:15:34.749 "get_zone_info": false, 00:15:34.749 "zone_management": false, 00:15:34.749 "zone_append": false, 00:15:34.749 "compare": false, 00:15:34.749 "compare_and_write": false, 00:15:34.749 "abort": true, 00:15:34.749 "seek_hole": false, 00:15:34.749 "seek_data": false, 00:15:34.749 "copy": true, 00:15:34.749 "nvme_iov_md": false 00:15:34.749 }, 00:15:34.749 "memory_domains": [ 00:15:34.749 { 00:15:34.749 "dma_device_id": "system", 00:15:34.749 "dma_device_type": 1 00:15:34.749 }, 00:15:34.749 { 00:15:34.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.749 "dma_device_type": 2 00:15:34.749 } 00:15:34.749 ], 00:15:34.749 "driver_specific": {} 00:15:34.749 }' 00:15:34.749 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.749 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.750 06:28:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:34.750 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.750 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.750 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:34.750 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.750 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.750 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:34.750 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.750 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.750 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:34.750 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.750 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:34.750 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:35.008 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:35.008 "name": "BaseBdev3", 00:15:35.008 "aliases": [ 00:15:35.008 "cf1393bb-48bc-11ef-a06c-59ddad71024c" 00:15:35.008 ], 00:15:35.008 "product_name": "Malloc disk", 00:15:35.008 "block_size": 512, 00:15:35.008 "num_blocks": 65536, 00:15:35.008 "uuid": "cf1393bb-48bc-11ef-a06c-59ddad71024c", 00:15:35.008 "assigned_rate_limits": { 00:15:35.008 "rw_ios_per_sec": 0, 00:15:35.008 "rw_mbytes_per_sec": 0, 00:15:35.008 "r_mbytes_per_sec": 0, 00:15:35.008 "w_mbytes_per_sec": 0 00:15:35.008 }, 00:15:35.008 "claimed": true, 00:15:35.008 "claim_type": "exclusive_write", 00:15:35.008 "zoned": false, 00:15:35.008 "supported_io_types": { 00:15:35.008 "read": true, 00:15:35.008 "write": true, 00:15:35.008 "unmap": true, 00:15:35.008 "flush": true, 00:15:35.008 "reset": true, 00:15:35.008 "nvme_admin": false, 00:15:35.008 "nvme_io": false, 00:15:35.008 "nvme_io_md": false, 00:15:35.008 "write_zeroes": true, 00:15:35.008 "zcopy": true, 00:15:35.008 "get_zone_info": false, 00:15:35.008 "zone_management": false, 00:15:35.008 "zone_append": false, 00:15:35.008 "compare": false, 00:15:35.008 "compare_and_write": false, 00:15:35.008 "abort": true, 00:15:35.008 "seek_hole": false, 00:15:35.008 "seek_data": false, 00:15:35.008 "copy": true, 00:15:35.008 "nvme_iov_md": false 00:15:35.008 }, 00:15:35.008 "memory_domains": [ 00:15:35.008 { 00:15:35.008 "dma_device_id": "system", 00:15:35.008 "dma_device_type": 1 00:15:35.008 }, 00:15:35.008 { 00:15:35.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.008 "dma_device_type": 2 00:15:35.008 } 00:15:35.008 ], 00:15:35.008 "driver_specific": {} 00:15:35.008 }' 00:15:35.008 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.008 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.008 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:35.008 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
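The repeated jq checks in this part of the trace come from the per-base-bdev loop at bdev/bdev_raid.sh@203-208: for each configured base bdev the script fetches the bdev JSON over the raid RPC socket and asserts a 512-byte block size with no metadata or DIF. A minimal sketch of that loop, assuming only the commands, jq filters and variable names visible in the trace (raid_bdev_info, base_bdev_names, base_bdev_info, the rpc.py path and /var/tmp/spdk-raid.sock); the actual script may differ in detail:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # fetch the raid bdev and pull out the names of its configured base bdevs (bdev_raid.sh@200-201)
    raid_bdev_info=$($rpc -s "$sock" bdev_get_bdevs -b Existed_Raid | jq '.[]')
    base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_bdev_info")
    for name in $base_bdev_names; do
        base_bdev_info=$($rpc -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
        # every base bdev is expected to be a 512-byte-block malloc disk without metadata or DIF
        [[ $(jq .block_size <<< "$base_bdev_info") == 512 ]]
        [[ $(jq .md_size <<< "$base_bdev_info") == null ]]
        [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]
        [[ $(jq .dif_type <<< "$base_bdev_info") == null ]]
    done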
00:15:35.008 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.008 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:35.008 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.008 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.008 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:35.008 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.008 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.266 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:35.266 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:35.266 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:35.266 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:35.524 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:35.524 "name": "BaseBdev4", 00:15:35.524 "aliases": [ 00:15:35.524 "cfe1e352-48bc-11ef-a06c-59ddad71024c" 00:15:35.524 ], 00:15:35.524 "product_name": "Malloc disk", 00:15:35.524 "block_size": 512, 00:15:35.524 "num_blocks": 65536, 00:15:35.524 "uuid": "cfe1e352-48bc-11ef-a06c-59ddad71024c", 00:15:35.524 "assigned_rate_limits": { 00:15:35.524 "rw_ios_per_sec": 0, 00:15:35.524 "rw_mbytes_per_sec": 0, 00:15:35.524 "r_mbytes_per_sec": 0, 00:15:35.524 "w_mbytes_per_sec": 0 00:15:35.524 }, 00:15:35.524 "claimed": true, 00:15:35.524 "claim_type": "exclusive_write", 00:15:35.524 "zoned": false, 00:15:35.524 "supported_io_types": { 00:15:35.524 "read": true, 00:15:35.524 "write": true, 00:15:35.524 "unmap": true, 00:15:35.524 "flush": true, 00:15:35.524 "reset": true, 00:15:35.524 "nvme_admin": false, 00:15:35.524 "nvme_io": false, 00:15:35.524 "nvme_io_md": false, 00:15:35.524 "write_zeroes": true, 00:15:35.524 "zcopy": true, 00:15:35.524 "get_zone_info": false, 00:15:35.524 "zone_management": false, 00:15:35.524 "zone_append": false, 00:15:35.524 "compare": false, 00:15:35.524 "compare_and_write": false, 00:15:35.524 "abort": true, 00:15:35.524 "seek_hole": false, 00:15:35.524 "seek_data": false, 00:15:35.524 "copy": true, 00:15:35.524 "nvme_iov_md": false 00:15:35.524 }, 00:15:35.524 "memory_domains": [ 00:15:35.524 { 00:15:35.524 "dma_device_id": "system", 00:15:35.524 "dma_device_type": 1 00:15:35.524 }, 00:15:35.524 { 00:15:35.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.524 "dma_device_type": 2 00:15:35.524 } 00:15:35.524 ], 00:15:35.524 "driver_specific": {} 00:15:35.524 }' 00:15:35.524 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.524 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.524 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:35.524 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.524 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.524 06:28:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:35.524 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.524 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.524 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:35.524 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.524 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.524 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:35.524 06:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:35.782 [2024-07-23 06:28:48.143664] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:35.782 [2024-07-23 06:28:48.143691] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.782 [2024-07-23 06:28:48.143708] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.782 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.040 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.040 "name": "Existed_Raid", 00:15:36.040 "uuid": "cdbf97ee-48bc-11ef-a06c-59ddad71024c", 00:15:36.040 "strip_size_kb": 64, 
00:15:36.040 "state": "offline", 00:15:36.040 "raid_level": "raid0", 00:15:36.040 "superblock": true, 00:15:36.040 "num_base_bdevs": 4, 00:15:36.040 "num_base_bdevs_discovered": 3, 00:15:36.040 "num_base_bdevs_operational": 3, 00:15:36.040 "base_bdevs_list": [ 00:15:36.040 { 00:15:36.040 "name": null, 00:15:36.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.040 "is_configured": false, 00:15:36.040 "data_offset": 2048, 00:15:36.040 "data_size": 63488 00:15:36.040 }, 00:15:36.040 { 00:15:36.040 "name": "BaseBdev2", 00:15:36.040 "uuid": "ce4bfbac-48bc-11ef-a06c-59ddad71024c", 00:15:36.040 "is_configured": true, 00:15:36.040 "data_offset": 2048, 00:15:36.040 "data_size": 63488 00:15:36.040 }, 00:15:36.040 { 00:15:36.040 "name": "BaseBdev3", 00:15:36.040 "uuid": "cf1393bb-48bc-11ef-a06c-59ddad71024c", 00:15:36.040 "is_configured": true, 00:15:36.040 "data_offset": 2048, 00:15:36.040 "data_size": 63488 00:15:36.040 }, 00:15:36.040 { 00:15:36.040 "name": "BaseBdev4", 00:15:36.040 "uuid": "cfe1e352-48bc-11ef-a06c-59ddad71024c", 00:15:36.040 "is_configured": true, 00:15:36.040 "data_offset": 2048, 00:15:36.040 "data_size": 63488 00:15:36.040 } 00:15:36.040 ] 00:15:36.040 }' 00:15:36.040 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.040 06:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.298 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:36.298 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:36.298 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.298 06:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:36.556 06:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:36.556 06:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:36.556 06:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:36.814 [2024-07-23 06:28:49.333734] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.071 06:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:37.071 06:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:37.071 06:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.071 06:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:37.329 06:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:37.329 06:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.329 06:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:37.586 [2024-07-23 06:28:49.916195] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.586 06:28:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:37.586 06:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:37.586 06:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.586 06:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:37.844 06:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:37.844 06:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.844 06:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:38.101 [2024-07-23 06:28:50.442577] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:38.101 [2024-07-23 06:28:50.442630] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d1539a34a00 name Existed_Raid, state offline 00:15:38.101 06:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:38.101 06:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:38.101 06:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.102 06:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:38.360 06:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:38.360 06:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:38.360 06:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:38.360 06:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:38.360 06:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:38.360 06:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.617 BaseBdev2 00:15:38.617 06:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:38.617 06:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:38.617 06:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:38.617 06:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:38.617 06:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:38.617 06:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:38.617 06:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.876 06:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:39.134 [ 
00:15:39.134 { 00:15:39.134 "name": "BaseBdev2", 00:15:39.134 "aliases": [ 00:15:39.134 "d3732b09-48bc-11ef-a06c-59ddad71024c" 00:15:39.134 ], 00:15:39.134 "product_name": "Malloc disk", 00:15:39.134 "block_size": 512, 00:15:39.134 "num_blocks": 65536, 00:15:39.134 "uuid": "d3732b09-48bc-11ef-a06c-59ddad71024c", 00:15:39.134 "assigned_rate_limits": { 00:15:39.134 "rw_ios_per_sec": 0, 00:15:39.134 "rw_mbytes_per_sec": 0, 00:15:39.134 "r_mbytes_per_sec": 0, 00:15:39.134 "w_mbytes_per_sec": 0 00:15:39.134 }, 00:15:39.134 "claimed": false, 00:15:39.134 "zoned": false, 00:15:39.134 "supported_io_types": { 00:15:39.134 "read": true, 00:15:39.134 "write": true, 00:15:39.134 "unmap": true, 00:15:39.134 "flush": true, 00:15:39.134 "reset": true, 00:15:39.134 "nvme_admin": false, 00:15:39.134 "nvme_io": false, 00:15:39.134 "nvme_io_md": false, 00:15:39.134 "write_zeroes": true, 00:15:39.134 "zcopy": true, 00:15:39.134 "get_zone_info": false, 00:15:39.134 "zone_management": false, 00:15:39.134 "zone_append": false, 00:15:39.134 "compare": false, 00:15:39.134 "compare_and_write": false, 00:15:39.134 "abort": true, 00:15:39.134 "seek_hole": false, 00:15:39.134 "seek_data": false, 00:15:39.134 "copy": true, 00:15:39.134 "nvme_iov_md": false 00:15:39.134 }, 00:15:39.134 "memory_domains": [ 00:15:39.134 { 00:15:39.134 "dma_device_id": "system", 00:15:39.134 "dma_device_type": 1 00:15:39.134 }, 00:15:39.134 { 00:15:39.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.134 "dma_device_type": 2 00:15:39.134 } 00:15:39.134 ], 00:15:39.134 "driver_specific": {} 00:15:39.134 } 00:15:39.134 ] 00:15:39.134 06:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:39.134 06:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:39.134 06:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:39.134 06:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:39.392 BaseBdev3 00:15:39.392 06:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:39.392 06:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:39.392 06:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:39.392 06:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:39.392 06:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:39.392 06:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:39.392 06:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:39.650 06:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:39.908 [ 00:15:39.908 { 00:15:39.908 "name": "BaseBdev3", 00:15:39.908 "aliases": [ 00:15:39.908 "d3ea3246-48bc-11ef-a06c-59ddad71024c" 00:15:39.908 ], 00:15:39.908 "product_name": "Malloc disk", 00:15:39.908 "block_size": 512, 00:15:39.908 "num_blocks": 65536, 00:15:39.908 "uuid": 
"d3ea3246-48bc-11ef-a06c-59ddad71024c", 00:15:39.908 "assigned_rate_limits": { 00:15:39.908 "rw_ios_per_sec": 0, 00:15:39.908 "rw_mbytes_per_sec": 0, 00:15:39.908 "r_mbytes_per_sec": 0, 00:15:39.908 "w_mbytes_per_sec": 0 00:15:39.908 }, 00:15:39.908 "claimed": false, 00:15:39.908 "zoned": false, 00:15:39.908 "supported_io_types": { 00:15:39.908 "read": true, 00:15:39.908 "write": true, 00:15:39.908 "unmap": true, 00:15:39.908 "flush": true, 00:15:39.908 "reset": true, 00:15:39.908 "nvme_admin": false, 00:15:39.908 "nvme_io": false, 00:15:39.908 "nvme_io_md": false, 00:15:39.908 "write_zeroes": true, 00:15:39.908 "zcopy": true, 00:15:39.908 "get_zone_info": false, 00:15:39.908 "zone_management": false, 00:15:39.908 "zone_append": false, 00:15:39.908 "compare": false, 00:15:39.908 "compare_and_write": false, 00:15:39.908 "abort": true, 00:15:39.908 "seek_hole": false, 00:15:39.908 "seek_data": false, 00:15:39.908 "copy": true, 00:15:39.908 "nvme_iov_md": false 00:15:39.908 }, 00:15:39.908 "memory_domains": [ 00:15:39.908 { 00:15:39.908 "dma_device_id": "system", 00:15:39.908 "dma_device_type": 1 00:15:39.908 }, 00:15:39.908 { 00:15:39.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.908 "dma_device_type": 2 00:15:39.908 } 00:15:39.908 ], 00:15:39.908 "driver_specific": {} 00:15:39.908 } 00:15:39.908 ] 00:15:39.908 06:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:39.908 06:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:39.908 06:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:39.908 06:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:40.167 BaseBdev4 00:15:40.167 06:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:40.167 06:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:40.167 06:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:40.167 06:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:40.167 06:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:40.167 06:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:40.167 06:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:40.424 06:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:40.682 [ 00:15:40.682 { 00:15:40.682 "name": "BaseBdev4", 00:15:40.682 "aliases": [ 00:15:40.682 "d45d909a-48bc-11ef-a06c-59ddad71024c" 00:15:40.682 ], 00:15:40.682 "product_name": "Malloc disk", 00:15:40.682 "block_size": 512, 00:15:40.682 "num_blocks": 65536, 00:15:40.682 "uuid": "d45d909a-48bc-11ef-a06c-59ddad71024c", 00:15:40.682 "assigned_rate_limits": { 00:15:40.682 "rw_ios_per_sec": 0, 00:15:40.682 "rw_mbytes_per_sec": 0, 00:15:40.682 "r_mbytes_per_sec": 0, 00:15:40.682 "w_mbytes_per_sec": 0 00:15:40.682 }, 00:15:40.682 "claimed": false, 00:15:40.682 "zoned": false, 00:15:40.682 
"supported_io_types": { 00:15:40.682 "read": true, 00:15:40.682 "write": true, 00:15:40.682 "unmap": true, 00:15:40.682 "flush": true, 00:15:40.682 "reset": true, 00:15:40.682 "nvme_admin": false, 00:15:40.682 "nvme_io": false, 00:15:40.682 "nvme_io_md": false, 00:15:40.682 "write_zeroes": true, 00:15:40.682 "zcopy": true, 00:15:40.682 "get_zone_info": false, 00:15:40.682 "zone_management": false, 00:15:40.682 "zone_append": false, 00:15:40.682 "compare": false, 00:15:40.682 "compare_and_write": false, 00:15:40.682 "abort": true, 00:15:40.682 "seek_hole": false, 00:15:40.682 "seek_data": false, 00:15:40.682 "copy": true, 00:15:40.682 "nvme_iov_md": false 00:15:40.682 }, 00:15:40.682 "memory_domains": [ 00:15:40.682 { 00:15:40.682 "dma_device_id": "system", 00:15:40.682 "dma_device_type": 1 00:15:40.682 }, 00:15:40.682 { 00:15:40.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.682 "dma_device_type": 2 00:15:40.682 } 00:15:40.682 ], 00:15:40.682 "driver_specific": {} 00:15:40.682 } 00:15:40.682 ] 00:15:40.682 06:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:40.682 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:40.682 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:40.682 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:40.939 [2024-07-23 06:28:53.332847] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:40.939 [2024-07-23 06:28:53.332911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:40.939 [2024-07-23 06:28:53.332937] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.939 [2024-07-23 06:28:53.333508] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.939 [2024-07-23 06:28:53.333528] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:40.939 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:40.939 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:40.939 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:40.939 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:40.939 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:40.939 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:40.939 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:40.939 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:40.939 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:40.939 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:40.939 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:40.939 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.197 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:41.197 "name": "Existed_Raid", 00:15:41.197 "uuid": "d4d18a23-48bc-11ef-a06c-59ddad71024c", 00:15:41.197 "strip_size_kb": 64, 00:15:41.197 "state": "configuring", 00:15:41.197 "raid_level": "raid0", 00:15:41.197 "superblock": true, 00:15:41.197 "num_base_bdevs": 4, 00:15:41.197 "num_base_bdevs_discovered": 3, 00:15:41.197 "num_base_bdevs_operational": 4, 00:15:41.197 "base_bdevs_list": [ 00:15:41.197 { 00:15:41.197 "name": "BaseBdev1", 00:15:41.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.197 "is_configured": false, 00:15:41.197 "data_offset": 0, 00:15:41.197 "data_size": 0 00:15:41.197 }, 00:15:41.197 { 00:15:41.197 "name": "BaseBdev2", 00:15:41.197 "uuid": "d3732b09-48bc-11ef-a06c-59ddad71024c", 00:15:41.197 "is_configured": true, 00:15:41.197 "data_offset": 2048, 00:15:41.197 "data_size": 63488 00:15:41.197 }, 00:15:41.197 { 00:15:41.197 "name": "BaseBdev3", 00:15:41.197 "uuid": "d3ea3246-48bc-11ef-a06c-59ddad71024c", 00:15:41.197 "is_configured": true, 00:15:41.197 "data_offset": 2048, 00:15:41.197 "data_size": 63488 00:15:41.197 }, 00:15:41.197 { 00:15:41.197 "name": "BaseBdev4", 00:15:41.197 "uuid": "d45d909a-48bc-11ef-a06c-59ddad71024c", 00:15:41.197 "is_configured": true, 00:15:41.197 "data_offset": 2048, 00:15:41.197 "data_size": 63488 00:15:41.197 } 00:15:41.197 ] 00:15:41.197 }' 00:15:41.197 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:41.197 06:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.549 06:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:41.807 [2024-07-23 06:28:54.192922] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:41.807 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:41.807 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:41.807 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:41.807 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:41.807 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:41.807 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:41.807 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:41.807 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:41.807 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:41.807 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:41.807 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.807 06:28:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.066 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:42.066 "name": "Existed_Raid", 00:15:42.066 "uuid": "d4d18a23-48bc-11ef-a06c-59ddad71024c", 00:15:42.066 "strip_size_kb": 64, 00:15:42.066 "state": "configuring", 00:15:42.066 "raid_level": "raid0", 00:15:42.066 "superblock": true, 00:15:42.066 "num_base_bdevs": 4, 00:15:42.066 "num_base_bdevs_discovered": 2, 00:15:42.066 "num_base_bdevs_operational": 4, 00:15:42.066 "base_bdevs_list": [ 00:15:42.066 { 00:15:42.066 "name": "BaseBdev1", 00:15:42.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.066 "is_configured": false, 00:15:42.066 "data_offset": 0, 00:15:42.066 "data_size": 0 00:15:42.066 }, 00:15:42.066 { 00:15:42.066 "name": null, 00:15:42.066 "uuid": "d3732b09-48bc-11ef-a06c-59ddad71024c", 00:15:42.066 "is_configured": false, 00:15:42.066 "data_offset": 2048, 00:15:42.066 "data_size": 63488 00:15:42.066 }, 00:15:42.066 { 00:15:42.066 "name": "BaseBdev3", 00:15:42.066 "uuid": "d3ea3246-48bc-11ef-a06c-59ddad71024c", 00:15:42.066 "is_configured": true, 00:15:42.066 "data_offset": 2048, 00:15:42.066 "data_size": 63488 00:15:42.066 }, 00:15:42.066 { 00:15:42.066 "name": "BaseBdev4", 00:15:42.066 "uuid": "d45d909a-48bc-11ef-a06c-59ddad71024c", 00:15:42.066 "is_configured": true, 00:15:42.066 "data_offset": 2048, 00:15:42.066 "data_size": 63488 00:15:42.067 } 00:15:42.067 ] 00:15:42.067 }' 00:15:42.067 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:42.067 06:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.325 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.325 06:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:42.583 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:42.583 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:42.842 [2024-07-23 06:28:55.321162] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.842 BaseBdev1 00:15:42.842 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:42.842 06:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:42.842 06:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:42.842 06:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:42.842 06:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:42.842 06:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:42.842 06:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:43.100 06:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:43.360 [ 00:15:43.360 { 00:15:43.360 "name": "BaseBdev1", 00:15:43.360 "aliases": [ 00:15:43.360 "d600e9d0-48bc-11ef-a06c-59ddad71024c" 00:15:43.360 ], 00:15:43.360 "product_name": "Malloc disk", 00:15:43.360 "block_size": 512, 00:15:43.360 "num_blocks": 65536, 00:15:43.360 "uuid": "d600e9d0-48bc-11ef-a06c-59ddad71024c", 00:15:43.360 "assigned_rate_limits": { 00:15:43.360 "rw_ios_per_sec": 0, 00:15:43.360 "rw_mbytes_per_sec": 0, 00:15:43.360 "r_mbytes_per_sec": 0, 00:15:43.360 "w_mbytes_per_sec": 0 00:15:43.360 }, 00:15:43.360 "claimed": true, 00:15:43.360 "claim_type": "exclusive_write", 00:15:43.360 "zoned": false, 00:15:43.360 "supported_io_types": { 00:15:43.360 "read": true, 00:15:43.360 "write": true, 00:15:43.360 "unmap": true, 00:15:43.360 "flush": true, 00:15:43.360 "reset": true, 00:15:43.360 "nvme_admin": false, 00:15:43.360 "nvme_io": false, 00:15:43.360 "nvme_io_md": false, 00:15:43.360 "write_zeroes": true, 00:15:43.360 "zcopy": true, 00:15:43.360 "get_zone_info": false, 00:15:43.360 "zone_management": false, 00:15:43.360 "zone_append": false, 00:15:43.360 "compare": false, 00:15:43.360 "compare_and_write": false, 00:15:43.360 "abort": true, 00:15:43.360 "seek_hole": false, 00:15:43.360 "seek_data": false, 00:15:43.360 "copy": true, 00:15:43.360 "nvme_iov_md": false 00:15:43.360 }, 00:15:43.360 "memory_domains": [ 00:15:43.360 { 00:15:43.360 "dma_device_id": "system", 00:15:43.360 "dma_device_type": 1 00:15:43.360 }, 00:15:43.360 { 00:15:43.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.360 "dma_device_type": 2 00:15:43.360 } 00:15:43.360 ], 00:15:43.360 "driver_specific": {} 00:15:43.360 } 00:15:43.360 ] 00:15:43.360 06:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:43.360 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:43.360 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:43.360 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:43.360 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:43.360 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:43.360 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:43.360 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.360 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.360 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.360 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.360 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.360 06:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.928 06:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.928 "name": "Existed_Raid", 00:15:43.928 "uuid": 
"d4d18a23-48bc-11ef-a06c-59ddad71024c", 00:15:43.928 "strip_size_kb": 64, 00:15:43.928 "state": "configuring", 00:15:43.928 "raid_level": "raid0", 00:15:43.928 "superblock": true, 00:15:43.928 "num_base_bdevs": 4, 00:15:43.928 "num_base_bdevs_discovered": 3, 00:15:43.928 "num_base_bdevs_operational": 4, 00:15:43.928 "base_bdevs_list": [ 00:15:43.928 { 00:15:43.928 "name": "BaseBdev1", 00:15:43.928 "uuid": "d600e9d0-48bc-11ef-a06c-59ddad71024c", 00:15:43.928 "is_configured": true, 00:15:43.928 "data_offset": 2048, 00:15:43.928 "data_size": 63488 00:15:43.928 }, 00:15:43.928 { 00:15:43.928 "name": null, 00:15:43.928 "uuid": "d3732b09-48bc-11ef-a06c-59ddad71024c", 00:15:43.928 "is_configured": false, 00:15:43.928 "data_offset": 2048, 00:15:43.928 "data_size": 63488 00:15:43.928 }, 00:15:43.928 { 00:15:43.928 "name": "BaseBdev3", 00:15:43.928 "uuid": "d3ea3246-48bc-11ef-a06c-59ddad71024c", 00:15:43.928 "is_configured": true, 00:15:43.928 "data_offset": 2048, 00:15:43.928 "data_size": 63488 00:15:43.928 }, 00:15:43.928 { 00:15:43.928 "name": "BaseBdev4", 00:15:43.928 "uuid": "d45d909a-48bc-11ef-a06c-59ddad71024c", 00:15:43.928 "is_configured": true, 00:15:43.928 "data_offset": 2048, 00:15:43.928 "data_size": 63488 00:15:43.928 } 00:15:43.928 ] 00:15:43.928 }' 00:15:43.928 06:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.928 06:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.186 06:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.186 06:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:44.444 06:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:44.444 06:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:44.703 [2024-07-23 06:28:57.085175] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:44.703 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:44.703 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:44.703 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:44.703 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:44.703 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:44.703 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:44.703 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.703 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.703 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.703 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.703 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.703 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.961 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:44.961 "name": "Existed_Raid", 00:15:44.961 "uuid": "d4d18a23-48bc-11ef-a06c-59ddad71024c", 00:15:44.961 "strip_size_kb": 64, 00:15:44.961 "state": "configuring", 00:15:44.961 "raid_level": "raid0", 00:15:44.961 "superblock": true, 00:15:44.961 "num_base_bdevs": 4, 00:15:44.961 "num_base_bdevs_discovered": 2, 00:15:44.961 "num_base_bdevs_operational": 4, 00:15:44.961 "base_bdevs_list": [ 00:15:44.961 { 00:15:44.961 "name": "BaseBdev1", 00:15:44.961 "uuid": "d600e9d0-48bc-11ef-a06c-59ddad71024c", 00:15:44.961 "is_configured": true, 00:15:44.961 "data_offset": 2048, 00:15:44.961 "data_size": 63488 00:15:44.961 }, 00:15:44.961 { 00:15:44.961 "name": null, 00:15:44.961 "uuid": "d3732b09-48bc-11ef-a06c-59ddad71024c", 00:15:44.961 "is_configured": false, 00:15:44.961 "data_offset": 2048, 00:15:44.961 "data_size": 63488 00:15:44.961 }, 00:15:44.961 { 00:15:44.961 "name": null, 00:15:44.961 "uuid": "d3ea3246-48bc-11ef-a06c-59ddad71024c", 00:15:44.961 "is_configured": false, 00:15:44.961 "data_offset": 2048, 00:15:44.961 "data_size": 63488 00:15:44.961 }, 00:15:44.961 { 00:15:44.961 "name": "BaseBdev4", 00:15:44.961 "uuid": "d45d909a-48bc-11ef-a06c-59ddad71024c", 00:15:44.961 "is_configured": true, 00:15:44.961 "data_offset": 2048, 00:15:44.961 "data_size": 63488 00:15:44.961 } 00:15:44.961 ] 00:15:44.961 }' 00:15:44.961 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:44.961 06:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.219 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.219 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:45.478 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:45.478 06:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:45.736 [2024-07-23 06:28:58.181269] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:45.736 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:45.736 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:45.736 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:45.736 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:45.736 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:45.736 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:45.736 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.736 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:15:45.736 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.736 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.737 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.737 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.995 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.995 "name": "Existed_Raid", 00:15:45.995 "uuid": "d4d18a23-48bc-11ef-a06c-59ddad71024c", 00:15:45.995 "strip_size_kb": 64, 00:15:45.995 "state": "configuring", 00:15:45.995 "raid_level": "raid0", 00:15:45.995 "superblock": true, 00:15:45.995 "num_base_bdevs": 4, 00:15:45.995 "num_base_bdevs_discovered": 3, 00:15:45.995 "num_base_bdevs_operational": 4, 00:15:45.995 "base_bdevs_list": [ 00:15:45.995 { 00:15:45.995 "name": "BaseBdev1", 00:15:45.995 "uuid": "d600e9d0-48bc-11ef-a06c-59ddad71024c", 00:15:45.995 "is_configured": true, 00:15:45.995 "data_offset": 2048, 00:15:45.995 "data_size": 63488 00:15:45.995 }, 00:15:45.995 { 00:15:45.995 "name": null, 00:15:45.995 "uuid": "d3732b09-48bc-11ef-a06c-59ddad71024c", 00:15:45.995 "is_configured": false, 00:15:45.995 "data_offset": 2048, 00:15:45.995 "data_size": 63488 00:15:45.995 }, 00:15:45.995 { 00:15:45.995 "name": "BaseBdev3", 00:15:45.995 "uuid": "d3ea3246-48bc-11ef-a06c-59ddad71024c", 00:15:45.995 "is_configured": true, 00:15:45.995 "data_offset": 2048, 00:15:45.995 "data_size": 63488 00:15:45.995 }, 00:15:45.995 { 00:15:45.995 "name": "BaseBdev4", 00:15:45.995 "uuid": "d45d909a-48bc-11ef-a06c-59ddad71024c", 00:15:45.995 "is_configured": true, 00:15:45.995 "data_offset": 2048, 00:15:45.995 "data_size": 63488 00:15:45.995 } 00:15:45.995 ] 00:15:45.995 }' 00:15:45.995 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.995 06:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.561 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.561 06:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:46.561 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:46.561 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:47.127 [2024-07-23 06:28:59.349296] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:47.127 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:47.127 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:47.127 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:47.127 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:47.127 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
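Each verify_raid_bdev_state call the trace re-enters (bdev/bdev_raid.sh@116 onward) follows the same pattern: pull the raid bdev's JSON from bdev_raid_get_bdevs and compare the expected state and base-bdev counts against it. A rough sketch of the check performed by the call entered here, using only field names, filters and values visible in the surrounding dumps (the real helper takes the expected values as arguments and also validates the raid level, strip size and base_bdevs_list):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    raid_bdev_info=$($rpc -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # after deleting BaseBdev1, the superblock raid0 volume is expected to drop back to
    # "configuring" with only 2 of its 4 base bdevs discovered
    [[ $(jq -r .state <<< "$raid_bdev_info") == configuring ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$raid_bdev_info") == 2 ]]
    [[ $(jq -r .num_base_bdevs_operational <<< "$raid_bdev_info") == 4 ]]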
00:15:47.127 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:47.127 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:47.127 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:47.128 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:47.128 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.128 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.128 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.128 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:47.128 "name": "Existed_Raid", 00:15:47.128 "uuid": "d4d18a23-48bc-11ef-a06c-59ddad71024c", 00:15:47.128 "strip_size_kb": 64, 00:15:47.128 "state": "configuring", 00:15:47.128 "raid_level": "raid0", 00:15:47.128 "superblock": true, 00:15:47.128 "num_base_bdevs": 4, 00:15:47.128 "num_base_bdevs_discovered": 2, 00:15:47.128 "num_base_bdevs_operational": 4, 00:15:47.128 "base_bdevs_list": [ 00:15:47.128 { 00:15:47.128 "name": null, 00:15:47.128 "uuid": "d600e9d0-48bc-11ef-a06c-59ddad71024c", 00:15:47.128 "is_configured": false, 00:15:47.128 "data_offset": 2048, 00:15:47.128 "data_size": 63488 00:15:47.128 }, 00:15:47.128 { 00:15:47.128 "name": null, 00:15:47.128 "uuid": "d3732b09-48bc-11ef-a06c-59ddad71024c", 00:15:47.128 "is_configured": false, 00:15:47.128 "data_offset": 2048, 00:15:47.128 "data_size": 63488 00:15:47.128 }, 00:15:47.128 { 00:15:47.128 "name": "BaseBdev3", 00:15:47.128 "uuid": "d3ea3246-48bc-11ef-a06c-59ddad71024c", 00:15:47.128 "is_configured": true, 00:15:47.128 "data_offset": 2048, 00:15:47.128 "data_size": 63488 00:15:47.128 }, 00:15:47.128 { 00:15:47.128 "name": "BaseBdev4", 00:15:47.128 "uuid": "d45d909a-48bc-11ef-a06c-59ddad71024c", 00:15:47.128 "is_configured": true, 00:15:47.128 "data_offset": 2048, 00:15:47.128 "data_size": 63488 00:15:47.128 } 00:15:47.128 ] 00:15:47.128 }' 00:15:47.128 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:47.128 06:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.693 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.693 06:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:47.951 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:47.951 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:48.209 [2024-07-23 06:29:00.499147] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.209 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:48.209 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:15:48.209 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:48.209 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:48.209 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:48.209 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:48.209 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.209 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.209 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.209 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.209 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.209 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.467 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:48.467 "name": "Existed_Raid", 00:15:48.467 "uuid": "d4d18a23-48bc-11ef-a06c-59ddad71024c", 00:15:48.467 "strip_size_kb": 64, 00:15:48.467 "state": "configuring", 00:15:48.467 "raid_level": "raid0", 00:15:48.467 "superblock": true, 00:15:48.467 "num_base_bdevs": 4, 00:15:48.467 "num_base_bdevs_discovered": 3, 00:15:48.467 "num_base_bdevs_operational": 4, 00:15:48.467 "base_bdevs_list": [ 00:15:48.467 { 00:15:48.467 "name": null, 00:15:48.467 "uuid": "d600e9d0-48bc-11ef-a06c-59ddad71024c", 00:15:48.467 "is_configured": false, 00:15:48.467 "data_offset": 2048, 00:15:48.467 "data_size": 63488 00:15:48.467 }, 00:15:48.467 { 00:15:48.467 "name": "BaseBdev2", 00:15:48.467 "uuid": "d3732b09-48bc-11ef-a06c-59ddad71024c", 00:15:48.467 "is_configured": true, 00:15:48.467 "data_offset": 2048, 00:15:48.467 "data_size": 63488 00:15:48.467 }, 00:15:48.467 { 00:15:48.467 "name": "BaseBdev3", 00:15:48.467 "uuid": "d3ea3246-48bc-11ef-a06c-59ddad71024c", 00:15:48.467 "is_configured": true, 00:15:48.467 "data_offset": 2048, 00:15:48.467 "data_size": 63488 00:15:48.467 }, 00:15:48.467 { 00:15:48.467 "name": "BaseBdev4", 00:15:48.467 "uuid": "d45d909a-48bc-11ef-a06c-59ddad71024c", 00:15:48.467 "is_configured": true, 00:15:48.467 "data_offset": 2048, 00:15:48.467 "data_size": 63488 00:15:48.467 } 00:15:48.467 ] 00:15:48.467 }' 00:15:48.467 06:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:48.467 06:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.725 06:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:48.726 06:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.983 06:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:48.983 06:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.983 06:29:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:49.241 06:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d600e9d0-48bc-11ef-a06c-59ddad71024c 00:15:49.499 [2024-07-23 06:29:01.783306] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:49.500 [2024-07-23 06:29:01.783368] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d1539a34f00 00:15:49.500 [2024-07-23 06:29:01.783373] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:49.500 [2024-07-23 06:29:01.783395] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d1539a97e20 00:15:49.500 [2024-07-23 06:29:01.783442] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3d1539a34f00 00:15:49.500 [2024-07-23 06:29:01.783447] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3d1539a34f00 00:15:49.500 [2024-07-23 06:29:01.783467] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.500 NewBaseBdev 00:15:49.500 06:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:49.500 06:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:49.500 06:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:49.500 06:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:49.500 06:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:49.500 06:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:49.500 06:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:49.758 06:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:50.017 [ 00:15:50.017 { 00:15:50.017 "name": "NewBaseBdev", 00:15:50.017 "aliases": [ 00:15:50.017 "d600e9d0-48bc-11ef-a06c-59ddad71024c" 00:15:50.017 ], 00:15:50.017 "product_name": "Malloc disk", 00:15:50.017 "block_size": 512, 00:15:50.017 "num_blocks": 65536, 00:15:50.017 "uuid": "d600e9d0-48bc-11ef-a06c-59ddad71024c", 00:15:50.017 "assigned_rate_limits": { 00:15:50.017 "rw_ios_per_sec": 0, 00:15:50.017 "rw_mbytes_per_sec": 0, 00:15:50.017 "r_mbytes_per_sec": 0, 00:15:50.017 "w_mbytes_per_sec": 0 00:15:50.017 }, 00:15:50.017 "claimed": true, 00:15:50.017 "claim_type": "exclusive_write", 00:15:50.017 "zoned": false, 00:15:50.017 "supported_io_types": { 00:15:50.017 "read": true, 00:15:50.017 "write": true, 00:15:50.017 "unmap": true, 00:15:50.017 "flush": true, 00:15:50.017 "reset": true, 00:15:50.017 "nvme_admin": false, 00:15:50.017 "nvme_io": false, 00:15:50.017 "nvme_io_md": false, 00:15:50.017 "write_zeroes": true, 00:15:50.017 "zcopy": true, 00:15:50.017 "get_zone_info": false, 00:15:50.017 "zone_management": false, 00:15:50.017 "zone_append": false, 00:15:50.017 "compare": false, 00:15:50.017 "compare_and_write": false, 00:15:50.017 "abort": true, 
00:15:50.017 "seek_hole": false, 00:15:50.017 "seek_data": false, 00:15:50.017 "copy": true, 00:15:50.017 "nvme_iov_md": false 00:15:50.017 }, 00:15:50.017 "memory_domains": [ 00:15:50.017 { 00:15:50.017 "dma_device_id": "system", 00:15:50.017 "dma_device_type": 1 00:15:50.017 }, 00:15:50.017 { 00:15:50.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.017 "dma_device_type": 2 00:15:50.017 } 00:15:50.017 ], 00:15:50.017 "driver_specific": {} 00:15:50.017 } 00:15:50.017 ] 00:15:50.017 06:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:50.017 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:50.017 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:50.017 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:50.017 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:50.017 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:50.017 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:50.017 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.017 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.017 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.017 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.017 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.017 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.275 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.275 "name": "Existed_Raid", 00:15:50.275 "uuid": "d4d18a23-48bc-11ef-a06c-59ddad71024c", 00:15:50.275 "strip_size_kb": 64, 00:15:50.275 "state": "online", 00:15:50.275 "raid_level": "raid0", 00:15:50.275 "superblock": true, 00:15:50.275 "num_base_bdevs": 4, 00:15:50.275 "num_base_bdevs_discovered": 4, 00:15:50.275 "num_base_bdevs_operational": 4, 00:15:50.275 "base_bdevs_list": [ 00:15:50.275 { 00:15:50.275 "name": "NewBaseBdev", 00:15:50.275 "uuid": "d600e9d0-48bc-11ef-a06c-59ddad71024c", 00:15:50.275 "is_configured": true, 00:15:50.275 "data_offset": 2048, 00:15:50.275 "data_size": 63488 00:15:50.275 }, 00:15:50.275 { 00:15:50.275 "name": "BaseBdev2", 00:15:50.275 "uuid": "d3732b09-48bc-11ef-a06c-59ddad71024c", 00:15:50.275 "is_configured": true, 00:15:50.275 "data_offset": 2048, 00:15:50.275 "data_size": 63488 00:15:50.275 }, 00:15:50.275 { 00:15:50.275 "name": "BaseBdev3", 00:15:50.275 "uuid": "d3ea3246-48bc-11ef-a06c-59ddad71024c", 00:15:50.275 "is_configured": true, 00:15:50.275 "data_offset": 2048, 00:15:50.275 "data_size": 63488 00:15:50.275 }, 00:15:50.275 { 00:15:50.275 "name": "BaseBdev4", 00:15:50.275 "uuid": "d45d909a-48bc-11ef-a06c-59ddad71024c", 00:15:50.275 "is_configured": true, 00:15:50.275 "data_offset": 2048, 00:15:50.275 "data_size": 63488 00:15:50.275 } 00:15:50.275 ] 00:15:50.275 }' 00:15:50.275 06:29:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.275 06:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.534 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:50.534 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:50.534 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:50.534 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:50.534 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:50.534 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:50.534 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:50.534 06:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:50.792 [2024-07-23 06:29:03.135268] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.792 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:50.792 "name": "Existed_Raid", 00:15:50.792 "aliases": [ 00:15:50.792 "d4d18a23-48bc-11ef-a06c-59ddad71024c" 00:15:50.792 ], 00:15:50.792 "product_name": "Raid Volume", 00:15:50.792 "block_size": 512, 00:15:50.792 "num_blocks": 253952, 00:15:50.792 "uuid": "d4d18a23-48bc-11ef-a06c-59ddad71024c", 00:15:50.792 "assigned_rate_limits": { 00:15:50.792 "rw_ios_per_sec": 0, 00:15:50.792 "rw_mbytes_per_sec": 0, 00:15:50.792 "r_mbytes_per_sec": 0, 00:15:50.792 "w_mbytes_per_sec": 0 00:15:50.792 }, 00:15:50.792 "claimed": false, 00:15:50.792 "zoned": false, 00:15:50.792 "supported_io_types": { 00:15:50.792 "read": true, 00:15:50.792 "write": true, 00:15:50.792 "unmap": true, 00:15:50.792 "flush": true, 00:15:50.792 "reset": true, 00:15:50.792 "nvme_admin": false, 00:15:50.792 "nvme_io": false, 00:15:50.792 "nvme_io_md": false, 00:15:50.792 "write_zeroes": true, 00:15:50.792 "zcopy": false, 00:15:50.792 "get_zone_info": false, 00:15:50.792 "zone_management": false, 00:15:50.792 "zone_append": false, 00:15:50.792 "compare": false, 00:15:50.792 "compare_and_write": false, 00:15:50.792 "abort": false, 00:15:50.792 "seek_hole": false, 00:15:50.792 "seek_data": false, 00:15:50.792 "copy": false, 00:15:50.792 "nvme_iov_md": false 00:15:50.792 }, 00:15:50.792 "memory_domains": [ 00:15:50.792 { 00:15:50.792 "dma_device_id": "system", 00:15:50.792 "dma_device_type": 1 00:15:50.792 }, 00:15:50.792 { 00:15:50.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.792 "dma_device_type": 2 00:15:50.792 }, 00:15:50.792 { 00:15:50.792 "dma_device_id": "system", 00:15:50.792 "dma_device_type": 1 00:15:50.792 }, 00:15:50.792 { 00:15:50.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.792 "dma_device_type": 2 00:15:50.792 }, 00:15:50.792 { 00:15:50.792 "dma_device_id": "system", 00:15:50.792 "dma_device_type": 1 00:15:50.792 }, 00:15:50.792 { 00:15:50.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.792 "dma_device_type": 2 00:15:50.792 }, 00:15:50.792 { 00:15:50.792 "dma_device_id": "system", 00:15:50.792 "dma_device_type": 1 00:15:50.792 }, 00:15:50.792 { 00:15:50.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.792 
"dma_device_type": 2 00:15:50.792 } 00:15:50.792 ], 00:15:50.792 "driver_specific": { 00:15:50.792 "raid": { 00:15:50.792 "uuid": "d4d18a23-48bc-11ef-a06c-59ddad71024c", 00:15:50.792 "strip_size_kb": 64, 00:15:50.792 "state": "online", 00:15:50.792 "raid_level": "raid0", 00:15:50.792 "superblock": true, 00:15:50.792 "num_base_bdevs": 4, 00:15:50.792 "num_base_bdevs_discovered": 4, 00:15:50.792 "num_base_bdevs_operational": 4, 00:15:50.792 "base_bdevs_list": [ 00:15:50.792 { 00:15:50.792 "name": "NewBaseBdev", 00:15:50.792 "uuid": "d600e9d0-48bc-11ef-a06c-59ddad71024c", 00:15:50.792 "is_configured": true, 00:15:50.792 "data_offset": 2048, 00:15:50.792 "data_size": 63488 00:15:50.792 }, 00:15:50.792 { 00:15:50.792 "name": "BaseBdev2", 00:15:50.792 "uuid": "d3732b09-48bc-11ef-a06c-59ddad71024c", 00:15:50.792 "is_configured": true, 00:15:50.792 "data_offset": 2048, 00:15:50.792 "data_size": 63488 00:15:50.792 }, 00:15:50.792 { 00:15:50.792 "name": "BaseBdev3", 00:15:50.792 "uuid": "d3ea3246-48bc-11ef-a06c-59ddad71024c", 00:15:50.792 "is_configured": true, 00:15:50.792 "data_offset": 2048, 00:15:50.792 "data_size": 63488 00:15:50.792 }, 00:15:50.792 { 00:15:50.792 "name": "BaseBdev4", 00:15:50.792 "uuid": "d45d909a-48bc-11ef-a06c-59ddad71024c", 00:15:50.792 "is_configured": true, 00:15:50.792 "data_offset": 2048, 00:15:50.792 "data_size": 63488 00:15:50.792 } 00:15:50.792 ] 00:15:50.792 } 00:15:50.792 } 00:15:50.792 }' 00:15:50.792 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.792 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:50.792 BaseBdev2 00:15:50.792 BaseBdev3 00:15:50.792 BaseBdev4' 00:15:50.792 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:50.792 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:50.792 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.049 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.049 "name": "NewBaseBdev", 00:15:51.049 "aliases": [ 00:15:51.049 "d600e9d0-48bc-11ef-a06c-59ddad71024c" 00:15:51.049 ], 00:15:51.049 "product_name": "Malloc disk", 00:15:51.049 "block_size": 512, 00:15:51.049 "num_blocks": 65536, 00:15:51.049 "uuid": "d600e9d0-48bc-11ef-a06c-59ddad71024c", 00:15:51.049 "assigned_rate_limits": { 00:15:51.049 "rw_ios_per_sec": 0, 00:15:51.049 "rw_mbytes_per_sec": 0, 00:15:51.049 "r_mbytes_per_sec": 0, 00:15:51.049 "w_mbytes_per_sec": 0 00:15:51.049 }, 00:15:51.049 "claimed": true, 00:15:51.049 "claim_type": "exclusive_write", 00:15:51.049 "zoned": false, 00:15:51.049 "supported_io_types": { 00:15:51.049 "read": true, 00:15:51.049 "write": true, 00:15:51.049 "unmap": true, 00:15:51.049 "flush": true, 00:15:51.049 "reset": true, 00:15:51.049 "nvme_admin": false, 00:15:51.049 "nvme_io": false, 00:15:51.049 "nvme_io_md": false, 00:15:51.049 "write_zeroes": true, 00:15:51.049 "zcopy": true, 00:15:51.049 "get_zone_info": false, 00:15:51.049 "zone_management": false, 00:15:51.049 "zone_append": false, 00:15:51.049 "compare": false, 00:15:51.049 "compare_and_write": false, 00:15:51.049 "abort": true, 00:15:51.049 "seek_hole": false, 00:15:51.049 "seek_data": false, 00:15:51.049 
"copy": true, 00:15:51.049 "nvme_iov_md": false 00:15:51.049 }, 00:15:51.049 "memory_domains": [ 00:15:51.049 { 00:15:51.049 "dma_device_id": "system", 00:15:51.049 "dma_device_type": 1 00:15:51.049 }, 00:15:51.049 { 00:15:51.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.049 "dma_device_type": 2 00:15:51.049 } 00:15:51.049 ], 00:15:51.049 "driver_specific": {} 00:15:51.049 }' 00:15:51.049 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.049 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.049 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.049 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.049 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.049 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:51.049 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.049 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.049 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:51.049 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.050 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.050 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.050 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.050 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:51.050 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.307 "name": "BaseBdev2", 00:15:51.307 "aliases": [ 00:15:51.307 "d3732b09-48bc-11ef-a06c-59ddad71024c" 00:15:51.307 ], 00:15:51.307 "product_name": "Malloc disk", 00:15:51.307 "block_size": 512, 00:15:51.307 "num_blocks": 65536, 00:15:51.307 "uuid": "d3732b09-48bc-11ef-a06c-59ddad71024c", 00:15:51.307 "assigned_rate_limits": { 00:15:51.307 "rw_ios_per_sec": 0, 00:15:51.307 "rw_mbytes_per_sec": 0, 00:15:51.307 "r_mbytes_per_sec": 0, 00:15:51.307 "w_mbytes_per_sec": 0 00:15:51.307 }, 00:15:51.307 "claimed": true, 00:15:51.307 "claim_type": "exclusive_write", 00:15:51.307 "zoned": false, 00:15:51.307 "supported_io_types": { 00:15:51.307 "read": true, 00:15:51.307 "write": true, 00:15:51.307 "unmap": true, 00:15:51.307 "flush": true, 00:15:51.307 "reset": true, 00:15:51.307 "nvme_admin": false, 00:15:51.307 "nvme_io": false, 00:15:51.307 "nvme_io_md": false, 00:15:51.307 "write_zeroes": true, 00:15:51.307 "zcopy": true, 00:15:51.307 "get_zone_info": false, 00:15:51.307 "zone_management": false, 00:15:51.307 "zone_append": false, 00:15:51.307 "compare": false, 00:15:51.307 "compare_and_write": false, 00:15:51.307 "abort": true, 00:15:51.307 "seek_hole": false, 00:15:51.307 "seek_data": false, 00:15:51.307 "copy": true, 00:15:51.307 "nvme_iov_md": false 00:15:51.307 }, 00:15:51.307 "memory_domains": [ 00:15:51.307 { 00:15:51.307 "dma_device_id": "system", 
00:15:51.307 "dma_device_type": 1 00:15:51.307 }, 00:15:51.307 { 00:15:51.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.307 "dma_device_type": 2 00:15:51.307 } 00:15:51.307 ], 00:15:51.307 "driver_specific": {} 00:15:51.307 }' 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:51.307 06:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.565 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.565 "name": "BaseBdev3", 00:15:51.565 "aliases": [ 00:15:51.565 "d3ea3246-48bc-11ef-a06c-59ddad71024c" 00:15:51.565 ], 00:15:51.565 "product_name": "Malloc disk", 00:15:51.565 "block_size": 512, 00:15:51.565 "num_blocks": 65536, 00:15:51.565 "uuid": "d3ea3246-48bc-11ef-a06c-59ddad71024c", 00:15:51.565 "assigned_rate_limits": { 00:15:51.565 "rw_ios_per_sec": 0, 00:15:51.565 "rw_mbytes_per_sec": 0, 00:15:51.565 "r_mbytes_per_sec": 0, 00:15:51.565 "w_mbytes_per_sec": 0 00:15:51.565 }, 00:15:51.565 "claimed": true, 00:15:51.565 "claim_type": "exclusive_write", 00:15:51.565 "zoned": false, 00:15:51.565 "supported_io_types": { 00:15:51.565 "read": true, 00:15:51.565 "write": true, 00:15:51.565 "unmap": true, 00:15:51.565 "flush": true, 00:15:51.565 "reset": true, 00:15:51.565 "nvme_admin": false, 00:15:51.565 "nvme_io": false, 00:15:51.565 "nvme_io_md": false, 00:15:51.565 "write_zeroes": true, 00:15:51.565 "zcopy": true, 00:15:51.565 "get_zone_info": false, 00:15:51.565 "zone_management": false, 00:15:51.565 "zone_append": false, 00:15:51.565 "compare": false, 00:15:51.565 "compare_and_write": false, 00:15:51.565 "abort": true, 00:15:51.565 "seek_hole": false, 00:15:51.565 "seek_data": false, 00:15:51.565 "copy": true, 00:15:51.565 "nvme_iov_md": false 00:15:51.565 }, 00:15:51.565 "memory_domains": [ 00:15:51.565 { 00:15:51.565 "dma_device_id": "system", 00:15:51.565 "dma_device_type": 1 00:15:51.565 }, 00:15:51.565 { 00:15:51.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.565 "dma_device_type": 
2 00:15:51.565 } 00:15:51.565 ], 00:15:51.565 "driver_specific": {} 00:15:51.565 }' 00:15:51.565 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.565 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.823 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.823 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.823 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.823 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:51.823 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.823 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.823 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:51.823 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.823 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.823 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.823 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.823 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:51.823 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:52.082 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:52.082 "name": "BaseBdev4", 00:15:52.082 "aliases": [ 00:15:52.082 "d45d909a-48bc-11ef-a06c-59ddad71024c" 00:15:52.082 ], 00:15:52.082 "product_name": "Malloc disk", 00:15:52.082 "block_size": 512, 00:15:52.082 "num_blocks": 65536, 00:15:52.082 "uuid": "d45d909a-48bc-11ef-a06c-59ddad71024c", 00:15:52.082 "assigned_rate_limits": { 00:15:52.082 "rw_ios_per_sec": 0, 00:15:52.082 "rw_mbytes_per_sec": 0, 00:15:52.082 "r_mbytes_per_sec": 0, 00:15:52.082 "w_mbytes_per_sec": 0 00:15:52.082 }, 00:15:52.082 "claimed": true, 00:15:52.082 "claim_type": "exclusive_write", 00:15:52.082 "zoned": false, 00:15:52.082 "supported_io_types": { 00:15:52.082 "read": true, 00:15:52.082 "write": true, 00:15:52.082 "unmap": true, 00:15:52.082 "flush": true, 00:15:52.082 "reset": true, 00:15:52.082 "nvme_admin": false, 00:15:52.082 "nvme_io": false, 00:15:52.082 "nvme_io_md": false, 00:15:52.082 "write_zeroes": true, 00:15:52.082 "zcopy": true, 00:15:52.082 "get_zone_info": false, 00:15:52.082 "zone_management": false, 00:15:52.082 "zone_append": false, 00:15:52.082 "compare": false, 00:15:52.082 "compare_and_write": false, 00:15:52.082 "abort": true, 00:15:52.082 "seek_hole": false, 00:15:52.082 "seek_data": false, 00:15:52.082 "copy": true, 00:15:52.082 "nvme_iov_md": false 00:15:52.082 }, 00:15:52.082 "memory_domains": [ 00:15:52.082 { 00:15:52.082 "dma_device_id": "system", 00:15:52.082 "dma_device_type": 1 00:15:52.082 }, 00:15:52.082 { 00:15:52.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.082 "dma_device_type": 2 00:15:52.082 } 00:15:52.082 ], 00:15:52.082 "driver_specific": {} 00:15:52.082 }' 00:15:52.082 06:29:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.082 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.082 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:52.082 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.082 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.082 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:52.082 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.082 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.082 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:52.082 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.082 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.082 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:52.082 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:52.340 [2024-07-23 06:29:04.683254] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.340 [2024-07-23 06:29:04.683281] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.340 [2024-07-23 06:29:04.683306] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.340 [2024-07-23 06:29:04.683323] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.340 [2024-07-23 06:29:04.683327] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d1539a34f00 name Existed_Raid, state offline 00:15:52.340 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 59238 00:15:52.340 06:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 59238 ']' 00:15:52.340 06:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 59238 00:15:52.340 06:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:52.340 06:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:52.340 06:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 59238 00:15:52.340 06:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:15:52.340 06:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:52.340 killing process with pid 59238 00:15:52.340 06:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:52.340 06:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59238' 00:15:52.340 06:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 59238 00:15:52.340 [2024-07-23 06:29:04.710472] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.340 06:29:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 59238 00:15:52.340 [2024-07-23 06:29:04.733475] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.610 06:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:52.611 00:15:52.611 real 0m27.922s 00:15:52.611 user 0m51.066s 00:15:52.611 sys 0m3.917s 00:15:52.611 ************************************ 00:15:52.611 END TEST raid_state_function_test_sb 00:15:52.611 ************************************ 00:15:52.611 06:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:52.611 06:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.611 06:29:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:52.611 06:29:04 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:15:52.611 06:29:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:52.611 06:29:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.611 06:29:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:52.611 ************************************ 00:15:52.611 START TEST raid_superblock_test 00:15:52.611 ************************************ 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=60060 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 60060 /var/tmp/spdk-raid.sock 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 60060 ']' 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.611 06:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.611 [2024-07-23 06:29:04.964611] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:52.611 [2024-07-23 06:29:04.964783] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:53.192 EAL: TSC is not safe to use in SMP mode 00:15:53.192 EAL: TSC is not invariant 00:15:53.192 [2024-07-23 06:29:05.500066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.192 [2024-07-23 06:29:05.585588] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:53.192 [2024-07-23 06:29:05.587670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.192 [2024-07-23 06:29:05.588452] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.192 [2024-07-23 06:29:05.588468] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.758 06:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.758 06:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:53.758 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:53.758 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:53.758 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:53.758 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:53.758 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:53.758 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:53.758 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:53.758 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:53.758 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:54.016 malloc1 00:15:54.016 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:54.275 [2024-07-23 06:29:06.624480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:54.275 [2024-07-23 06:29:06.624548] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.275 [2024-07-23 06:29:06.624562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2df80ac34780 00:15:54.275 [2024-07-23 06:29:06.624571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.275 [2024-07-23 06:29:06.625476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.275 [2024-07-23 06:29:06.625503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:54.275 pt1 00:15:54.275 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:54.275 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:54.275 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:54.275 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:54.275 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:54.275 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:54.275 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:54.275 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:54.275 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:54.533 malloc2 00:15:54.533 06:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:54.809 [2024-07-23 06:29:07.192502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:54.809 [2024-07-23 06:29:07.192557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.809 [2024-07-23 06:29:07.192570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2df80ac34c80 00:15:54.809 [2024-07-23 06:29:07.192579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.809 [2024-07-23 06:29:07.193236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.809 [2024-07-23 06:29:07.193262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:54.809 pt2 00:15:54.809 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:54.809 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:54.809 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:15:54.809 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:15:54.809 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:54.809 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:54.809 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:54.809 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:54.809 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:55.084 malloc3 00:15:55.084 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:55.342 [2024-07-23 06:29:07.748507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:55.342 [2024-07-23 06:29:07.748568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.342 [2024-07-23 06:29:07.748581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2df80ac35180 00:15:55.342 [2024-07-23 06:29:07.748590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.342 [2024-07-23 06:29:07.749246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.342 [2024-07-23 06:29:07.749272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:55.342 pt3 00:15:55.342 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:55.342 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:55.342 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:15:55.342 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:15:55.342 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:55.342 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:55.342 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:55.342 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:55.342 06:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:15:55.599 malloc4 00:15:55.599 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:55.857 [2024-07-23 06:29:08.280527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:55.857 [2024-07-23 06:29:08.280582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.857 [2024-07-23 06:29:08.280610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2df80ac35680 00:15:55.857 [2024-07-23 06:29:08.280626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.857 [2024-07-23 06:29:08.281316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.857 [2024-07-23 06:29:08.281341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:55.857 pt4 00:15:55.857 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:55.857 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:55.857 06:29:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:15:56.115 [2024-07-23 06:29:08.516631] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:56.115 [2024-07-23 06:29:08.517246] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:56.115 [2024-07-23 06:29:08.517267] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:56.115 [2024-07-23 06:29:08.517279] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:56.115 [2024-07-23 06:29:08.517347] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2df80ac35900 00:15:56.115 [2024-07-23 06:29:08.517354] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:56.115 [2024-07-23 06:29:08.517389] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2df80ac97e20 00:15:56.115 [2024-07-23 06:29:08.517465] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2df80ac35900 00:15:56.115 [2024-07-23 06:29:08.517470] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2df80ac35900 00:15:56.115 [2024-07-23 06:29:08.517498] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.115 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:56.115 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:56.115 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:56.115 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:56.115 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:56.115 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:56.115 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:56.115 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:56.115 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:56.115 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:56.115 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.115 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.373 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:56.373 "name": "raid_bdev1", 00:15:56.373 "uuid": "ddde669a-48bc-11ef-a06c-59ddad71024c", 00:15:56.373 "strip_size_kb": 64, 00:15:56.373 "state": "online", 00:15:56.373 "raid_level": "raid0", 00:15:56.373 "superblock": true, 00:15:56.373 "num_base_bdevs": 4, 00:15:56.373 "num_base_bdevs_discovered": 4, 00:15:56.373 "num_base_bdevs_operational": 4, 00:15:56.373 "base_bdevs_list": [ 00:15:56.373 { 00:15:56.373 "name": "pt1", 00:15:56.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.373 "is_configured": true, 00:15:56.373 "data_offset": 2048, 00:15:56.373 "data_size": 
63488 00:15:56.373 }, 00:15:56.373 { 00:15:56.373 "name": "pt2", 00:15:56.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.374 "is_configured": true, 00:15:56.374 "data_offset": 2048, 00:15:56.374 "data_size": 63488 00:15:56.374 }, 00:15:56.374 { 00:15:56.374 "name": "pt3", 00:15:56.374 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.374 "is_configured": true, 00:15:56.374 "data_offset": 2048, 00:15:56.374 "data_size": 63488 00:15:56.374 }, 00:15:56.374 { 00:15:56.374 "name": "pt4", 00:15:56.374 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:56.374 "is_configured": true, 00:15:56.374 "data_offset": 2048, 00:15:56.374 "data_size": 63488 00:15:56.374 } 00:15:56.374 ] 00:15:56.374 }' 00:15:56.374 06:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:56.374 06:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.631 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:56.631 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:56.631 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:56.631 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:56.631 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:56.631 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:56.631 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:56.631 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:56.889 [2024-07-23 06:29:09.372711] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.889 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:56.889 "name": "raid_bdev1", 00:15:56.889 "aliases": [ 00:15:56.889 "ddde669a-48bc-11ef-a06c-59ddad71024c" 00:15:56.889 ], 00:15:56.889 "product_name": "Raid Volume", 00:15:56.889 "block_size": 512, 00:15:56.889 "num_blocks": 253952, 00:15:56.889 "uuid": "ddde669a-48bc-11ef-a06c-59ddad71024c", 00:15:56.889 "assigned_rate_limits": { 00:15:56.889 "rw_ios_per_sec": 0, 00:15:56.889 "rw_mbytes_per_sec": 0, 00:15:56.889 "r_mbytes_per_sec": 0, 00:15:56.889 "w_mbytes_per_sec": 0 00:15:56.889 }, 00:15:56.889 "claimed": false, 00:15:56.889 "zoned": false, 00:15:56.889 "supported_io_types": { 00:15:56.889 "read": true, 00:15:56.889 "write": true, 00:15:56.889 "unmap": true, 00:15:56.889 "flush": true, 00:15:56.889 "reset": true, 00:15:56.889 "nvme_admin": false, 00:15:56.889 "nvme_io": false, 00:15:56.889 "nvme_io_md": false, 00:15:56.889 "write_zeroes": true, 00:15:56.889 "zcopy": false, 00:15:56.889 "get_zone_info": false, 00:15:56.889 "zone_management": false, 00:15:56.889 "zone_append": false, 00:15:56.889 "compare": false, 00:15:56.889 "compare_and_write": false, 00:15:56.889 "abort": false, 00:15:56.889 "seek_hole": false, 00:15:56.889 "seek_data": false, 00:15:56.889 "copy": false, 00:15:56.889 "nvme_iov_md": false 00:15:56.889 }, 00:15:56.889 "memory_domains": [ 00:15:56.889 { 00:15:56.889 "dma_device_id": "system", 00:15:56.889 "dma_device_type": 1 00:15:56.889 }, 00:15:56.889 { 00:15:56.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.889 "dma_device_type": 2 
00:15:56.889 }, 00:15:56.889 { 00:15:56.889 "dma_device_id": "system", 00:15:56.889 "dma_device_type": 1 00:15:56.889 }, 00:15:56.889 { 00:15:56.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.889 "dma_device_type": 2 00:15:56.889 }, 00:15:56.889 { 00:15:56.889 "dma_device_id": "system", 00:15:56.889 "dma_device_type": 1 00:15:56.889 }, 00:15:56.889 { 00:15:56.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.889 "dma_device_type": 2 00:15:56.889 }, 00:15:56.889 { 00:15:56.889 "dma_device_id": "system", 00:15:56.889 "dma_device_type": 1 00:15:56.889 }, 00:15:56.889 { 00:15:56.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.889 "dma_device_type": 2 00:15:56.889 } 00:15:56.889 ], 00:15:56.889 "driver_specific": { 00:15:56.889 "raid": { 00:15:56.889 "uuid": "ddde669a-48bc-11ef-a06c-59ddad71024c", 00:15:56.889 "strip_size_kb": 64, 00:15:56.889 "state": "online", 00:15:56.889 "raid_level": "raid0", 00:15:56.889 "superblock": true, 00:15:56.890 "num_base_bdevs": 4, 00:15:56.890 "num_base_bdevs_discovered": 4, 00:15:56.890 "num_base_bdevs_operational": 4, 00:15:56.890 "base_bdevs_list": [ 00:15:56.890 { 00:15:56.890 "name": "pt1", 00:15:56.890 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.890 "is_configured": true, 00:15:56.890 "data_offset": 2048, 00:15:56.890 "data_size": 63488 00:15:56.890 }, 00:15:56.890 { 00:15:56.890 "name": "pt2", 00:15:56.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.890 "is_configured": true, 00:15:56.890 "data_offset": 2048, 00:15:56.890 "data_size": 63488 00:15:56.890 }, 00:15:56.890 { 00:15:56.890 "name": "pt3", 00:15:56.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.890 "is_configured": true, 00:15:56.890 "data_offset": 2048, 00:15:56.890 "data_size": 63488 00:15:56.890 }, 00:15:56.890 { 00:15:56.890 "name": "pt4", 00:15:56.890 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:56.890 "is_configured": true, 00:15:56.890 "data_offset": 2048, 00:15:56.890 "data_size": 63488 00:15:56.890 } 00:15:56.890 ] 00:15:56.890 } 00:15:56.890 } 00:15:56.890 }' 00:15:56.890 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.890 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:56.890 pt2 00:15:56.890 pt3 00:15:56.890 pt4' 00:15:56.890 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.890 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:56.890 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.148 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.148 "name": "pt1", 00:15:57.148 "aliases": [ 00:15:57.148 "00000000-0000-0000-0000-000000000001" 00:15:57.148 ], 00:15:57.148 "product_name": "passthru", 00:15:57.148 "block_size": 512, 00:15:57.148 "num_blocks": 65536, 00:15:57.148 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.148 "assigned_rate_limits": { 00:15:57.148 "rw_ios_per_sec": 0, 00:15:57.148 "rw_mbytes_per_sec": 0, 00:15:57.148 "r_mbytes_per_sec": 0, 00:15:57.148 "w_mbytes_per_sec": 0 00:15:57.148 }, 00:15:57.148 "claimed": true, 00:15:57.148 "claim_type": "exclusive_write", 00:15:57.148 "zoned": false, 00:15:57.148 "supported_io_types": { 00:15:57.148 "read": true, 00:15:57.148 "write": 
true, 00:15:57.148 "unmap": true, 00:15:57.148 "flush": true, 00:15:57.148 "reset": true, 00:15:57.148 "nvme_admin": false, 00:15:57.148 "nvme_io": false, 00:15:57.148 "nvme_io_md": false, 00:15:57.148 "write_zeroes": true, 00:15:57.148 "zcopy": true, 00:15:57.148 "get_zone_info": false, 00:15:57.148 "zone_management": false, 00:15:57.148 "zone_append": false, 00:15:57.148 "compare": false, 00:15:57.148 "compare_and_write": false, 00:15:57.148 "abort": true, 00:15:57.148 "seek_hole": false, 00:15:57.148 "seek_data": false, 00:15:57.148 "copy": true, 00:15:57.148 "nvme_iov_md": false 00:15:57.148 }, 00:15:57.148 "memory_domains": [ 00:15:57.148 { 00:15:57.148 "dma_device_id": "system", 00:15:57.148 "dma_device_type": 1 00:15:57.148 }, 00:15:57.148 { 00:15:57.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.148 "dma_device_type": 2 00:15:57.148 } 00:15:57.148 ], 00:15:57.148 "driver_specific": { 00:15:57.148 "passthru": { 00:15:57.148 "name": "pt1", 00:15:57.148 "base_bdev_name": "malloc1" 00:15:57.148 } 00:15:57.148 } 00:15:57.148 }' 00:15:57.148 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.148 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.148 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.148 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.148 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.148 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.148 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.148 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.409 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.409 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.409 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.409 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.409 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:57.409 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:57.409 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.667 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.667 "name": "pt2", 00:15:57.667 "aliases": [ 00:15:57.667 "00000000-0000-0000-0000-000000000002" 00:15:57.667 ], 00:15:57.667 "product_name": "passthru", 00:15:57.667 "block_size": 512, 00:15:57.667 "num_blocks": 65536, 00:15:57.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.667 "assigned_rate_limits": { 00:15:57.667 "rw_ios_per_sec": 0, 00:15:57.667 "rw_mbytes_per_sec": 0, 00:15:57.667 "r_mbytes_per_sec": 0, 00:15:57.667 "w_mbytes_per_sec": 0 00:15:57.667 }, 00:15:57.667 "claimed": true, 00:15:57.667 "claim_type": "exclusive_write", 00:15:57.667 "zoned": false, 00:15:57.667 "supported_io_types": { 00:15:57.667 "read": true, 00:15:57.667 "write": true, 00:15:57.667 "unmap": true, 00:15:57.667 "flush": true, 00:15:57.667 "reset": true, 00:15:57.667 "nvme_admin": false, 00:15:57.667 "nvme_io": false, 
00:15:57.667 "nvme_io_md": false, 00:15:57.667 "write_zeroes": true, 00:15:57.667 "zcopy": true, 00:15:57.667 "get_zone_info": false, 00:15:57.667 "zone_management": false, 00:15:57.667 "zone_append": false, 00:15:57.667 "compare": false, 00:15:57.667 "compare_and_write": false, 00:15:57.667 "abort": true, 00:15:57.667 "seek_hole": false, 00:15:57.667 "seek_data": false, 00:15:57.667 "copy": true, 00:15:57.667 "nvme_iov_md": false 00:15:57.667 }, 00:15:57.667 "memory_domains": [ 00:15:57.667 { 00:15:57.667 "dma_device_id": "system", 00:15:57.667 "dma_device_type": 1 00:15:57.667 }, 00:15:57.667 { 00:15:57.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.667 "dma_device_type": 2 00:15:57.667 } 00:15:57.667 ], 00:15:57.667 "driver_specific": { 00:15:57.667 "passthru": { 00:15:57.667 "name": "pt2", 00:15:57.667 "base_bdev_name": "malloc2" 00:15:57.667 } 00:15:57.667 } 00:15:57.667 }' 00:15:57.667 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.667 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.667 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.667 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.667 06:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.667 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.667 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.667 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.667 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.667 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.667 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.667 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.667 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:57.667 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:57.667 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.925 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.925 "name": "pt3", 00:15:57.925 "aliases": [ 00:15:57.925 "00000000-0000-0000-0000-000000000003" 00:15:57.925 ], 00:15:57.925 "product_name": "passthru", 00:15:57.925 "block_size": 512, 00:15:57.925 "num_blocks": 65536, 00:15:57.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.925 "assigned_rate_limits": { 00:15:57.925 "rw_ios_per_sec": 0, 00:15:57.925 "rw_mbytes_per_sec": 0, 00:15:57.925 "r_mbytes_per_sec": 0, 00:15:57.925 "w_mbytes_per_sec": 0 00:15:57.925 }, 00:15:57.925 "claimed": true, 00:15:57.925 "claim_type": "exclusive_write", 00:15:57.925 "zoned": false, 00:15:57.925 "supported_io_types": { 00:15:57.925 "read": true, 00:15:57.925 "write": true, 00:15:57.925 "unmap": true, 00:15:57.925 "flush": true, 00:15:57.925 "reset": true, 00:15:57.925 "nvme_admin": false, 00:15:57.925 "nvme_io": false, 00:15:57.925 "nvme_io_md": false, 00:15:57.926 "write_zeroes": true, 00:15:57.926 "zcopy": true, 00:15:57.926 "get_zone_info": false, 00:15:57.926 
"zone_management": false, 00:15:57.926 "zone_append": false, 00:15:57.926 "compare": false, 00:15:57.926 "compare_and_write": false, 00:15:57.926 "abort": true, 00:15:57.926 "seek_hole": false, 00:15:57.926 "seek_data": false, 00:15:57.926 "copy": true, 00:15:57.926 "nvme_iov_md": false 00:15:57.926 }, 00:15:57.926 "memory_domains": [ 00:15:57.926 { 00:15:57.926 "dma_device_id": "system", 00:15:57.926 "dma_device_type": 1 00:15:57.926 }, 00:15:57.926 { 00:15:57.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.926 "dma_device_type": 2 00:15:57.926 } 00:15:57.926 ], 00:15:57.926 "driver_specific": { 00:15:57.926 "passthru": { 00:15:57.926 "name": "pt3", 00:15:57.926 "base_bdev_name": "malloc3" 00:15:57.926 } 00:15:57.926 } 00:15:57.926 }' 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:57.926 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:58.184 "name": "pt4", 00:15:58.184 "aliases": [ 00:15:58.184 "00000000-0000-0000-0000-000000000004" 00:15:58.184 ], 00:15:58.184 "product_name": "passthru", 00:15:58.184 "block_size": 512, 00:15:58.184 "num_blocks": 65536, 00:15:58.184 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:58.184 "assigned_rate_limits": { 00:15:58.184 "rw_ios_per_sec": 0, 00:15:58.184 "rw_mbytes_per_sec": 0, 00:15:58.184 "r_mbytes_per_sec": 0, 00:15:58.184 "w_mbytes_per_sec": 0 00:15:58.184 }, 00:15:58.184 "claimed": true, 00:15:58.184 "claim_type": "exclusive_write", 00:15:58.184 "zoned": false, 00:15:58.184 "supported_io_types": { 00:15:58.184 "read": true, 00:15:58.184 "write": true, 00:15:58.184 "unmap": true, 00:15:58.184 "flush": true, 00:15:58.184 "reset": true, 00:15:58.184 "nvme_admin": false, 00:15:58.184 "nvme_io": false, 00:15:58.184 "nvme_io_md": false, 00:15:58.184 "write_zeroes": true, 00:15:58.184 "zcopy": true, 00:15:58.184 "get_zone_info": false, 00:15:58.184 "zone_management": false, 00:15:58.184 "zone_append": false, 00:15:58.184 "compare": false, 00:15:58.184 "compare_and_write": false, 00:15:58.184 "abort": 
true, 00:15:58.184 "seek_hole": false, 00:15:58.184 "seek_data": false, 00:15:58.184 "copy": true, 00:15:58.184 "nvme_iov_md": false 00:15:58.184 }, 00:15:58.184 "memory_domains": [ 00:15:58.184 { 00:15:58.184 "dma_device_id": "system", 00:15:58.184 "dma_device_type": 1 00:15:58.184 }, 00:15:58.184 { 00:15:58.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.184 "dma_device_type": 2 00:15:58.184 } 00:15:58.184 ], 00:15:58.184 "driver_specific": { 00:15:58.184 "passthru": { 00:15:58.184 "name": "pt4", 00:15:58.184 "base_bdev_name": "malloc4" 00:15:58.184 } 00:15:58.184 } 00:15:58.184 }' 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:58.184 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:58.442 [2024-07-23 06:29:10.948780] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.700 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=ddde669a-48bc-11ef-a06c-59ddad71024c 00:15:58.701 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z ddde669a-48bc-11ef-a06c-59ddad71024c ']' 00:15:58.701 06:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:58.701 [2024-07-23 06:29:11.192731] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.701 [2024-07-23 06:29:11.192757] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.701 [2024-07-23 06:29:11.192781] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.701 [2024-07-23 06:29:11.192798] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.701 [2024-07-23 06:29:11.192802] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2df80ac35900 name raid_bdev1, state offline 00:15:58.701 06:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:58.701 06:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.959 06:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:58.959 06:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:58.959 06:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.959 06:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:59.221 06:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:59.221 06:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:59.478 06:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:59.478 06:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:59.736 06:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:59.736 06:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:59.994 06:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:59.994 06:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:00.253 06:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:00.253 06:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:00.253 06:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:00.253 06:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:00.253 06:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.253 06:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.253 06:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.253 06:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.253 06:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.253 06:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.253 06:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.253 06:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:00.253 06:29:12 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:00.512 [2024-07-23 06:29:12.952790] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:00.512 [2024-07-23 06:29:12.953371] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:00.512 [2024-07-23 06:29:12.953392] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:00.512 [2024-07-23 06:29:12.953401] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:00.512 [2024-07-23 06:29:12.953417] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:00.512 [2024-07-23 06:29:12.953454] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:00.512 [2024-07-23 06:29:12.953467] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:00.512 [2024-07-23 06:29:12.953477] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:00.512 [2024-07-23 06:29:12.953486] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.512 [2024-07-23 06:29:12.953491] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2df80ac35680 name raid_bdev1, state configuring 00:16:00.512 request: 00:16:00.512 { 00:16:00.512 "name": "raid_bdev1", 00:16:00.512 "raid_level": "raid0", 00:16:00.512 "base_bdevs": [ 00:16:00.512 "malloc1", 00:16:00.512 "malloc2", 00:16:00.512 "malloc3", 00:16:00.512 "malloc4" 00:16:00.512 ], 00:16:00.512 "strip_size_kb": 64, 00:16:00.512 "superblock": false, 00:16:00.512 "method": "bdev_raid_create", 00:16:00.512 "req_id": 1 00:16:00.512 } 00:16:00.512 Got JSON-RPC error response 00:16:00.512 response: 00:16:00.512 { 00:16:00.512 "code": -17, 00:16:00.512 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:00.512 } 00:16:00.512 06:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:00.512 06:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:00.512 06:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:00.512 06:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:00.512 06:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.512 06:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:00.771 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:00.771 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:00.771 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:01.029 [2024-07-23 06:29:13.416796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:01.029 [2024-07-23 06:29:13.416855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:01.029 [2024-07-23 06:29:13.416867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2df80ac35180 00:16:01.029 [2024-07-23 06:29:13.416876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.029 [2024-07-23 06:29:13.417525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.029 [2024-07-23 06:29:13.417550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:01.029 [2024-07-23 06:29:13.417576] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:01.029 [2024-07-23 06:29:13.417589] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:01.029 pt1 00:16:01.029 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:01.029 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:01.029 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:01.029 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:01.029 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:01.029 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:01.029 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:01.029 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:01.029 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:01.029 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:01.029 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.029 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.288 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.288 "name": "raid_bdev1", 00:16:01.288 "uuid": "ddde669a-48bc-11ef-a06c-59ddad71024c", 00:16:01.288 "strip_size_kb": 64, 00:16:01.288 "state": "configuring", 00:16:01.288 "raid_level": "raid0", 00:16:01.288 "superblock": true, 00:16:01.288 "num_base_bdevs": 4, 00:16:01.288 "num_base_bdevs_discovered": 1, 00:16:01.288 "num_base_bdevs_operational": 4, 00:16:01.288 "base_bdevs_list": [ 00:16:01.288 { 00:16:01.288 "name": "pt1", 00:16:01.288 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:01.288 "is_configured": true, 00:16:01.288 "data_offset": 2048, 00:16:01.288 "data_size": 63488 00:16:01.288 }, 00:16:01.288 { 00:16:01.288 "name": null, 00:16:01.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.288 "is_configured": false, 00:16:01.288 "data_offset": 2048, 00:16:01.288 "data_size": 63488 00:16:01.288 }, 00:16:01.288 { 00:16:01.288 "name": null, 00:16:01.288 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.288 "is_configured": false, 00:16:01.288 "data_offset": 2048, 00:16:01.288 "data_size": 63488 00:16:01.288 }, 00:16:01.288 { 00:16:01.288 "name": null, 00:16:01.288 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:01.288 "is_configured": false, 00:16:01.288 "data_offset": 2048, 00:16:01.288 "data_size": 63488 
00:16:01.288 } 00:16:01.288 ] 00:16:01.288 }' 00:16:01.288 06:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.288 06:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.548 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:16:01.548 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:01.808 [2024-07-23 06:29:14.232821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:01.808 [2024-07-23 06:29:14.232882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.808 [2024-07-23 06:29:14.232895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2df80ac34780 00:16:01.808 [2024-07-23 06:29:14.232904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.808 [2024-07-23 06:29:14.233020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.808 [2024-07-23 06:29:14.233032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:01.808 [2024-07-23 06:29:14.233056] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:01.808 [2024-07-23 06:29:14.233066] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.808 pt2 00:16:01.808 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:02.066 [2024-07-23 06:29:14.504830] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:02.066 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:02.066 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:02.066 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:02.066 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:02.066 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:02.066 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:02.066 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.066 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.066 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.066 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.066 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.066 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.325 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.325 "name": "raid_bdev1", 00:16:02.325 "uuid": "ddde669a-48bc-11ef-a06c-59ddad71024c", 00:16:02.325 "strip_size_kb": 64, 00:16:02.325 "state": "configuring", 00:16:02.325 "raid_level": 
"raid0", 00:16:02.325 "superblock": true, 00:16:02.325 "num_base_bdevs": 4, 00:16:02.325 "num_base_bdevs_discovered": 1, 00:16:02.325 "num_base_bdevs_operational": 4, 00:16:02.325 "base_bdevs_list": [ 00:16:02.325 { 00:16:02.325 "name": "pt1", 00:16:02.325 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.325 "is_configured": true, 00:16:02.325 "data_offset": 2048, 00:16:02.325 "data_size": 63488 00:16:02.325 }, 00:16:02.325 { 00:16:02.325 "name": null, 00:16:02.325 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.325 "is_configured": false, 00:16:02.325 "data_offset": 2048, 00:16:02.325 "data_size": 63488 00:16:02.325 }, 00:16:02.325 { 00:16:02.325 "name": null, 00:16:02.325 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:02.325 "is_configured": false, 00:16:02.325 "data_offset": 2048, 00:16:02.325 "data_size": 63488 00:16:02.325 }, 00:16:02.325 { 00:16:02.325 "name": null, 00:16:02.325 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:02.325 "is_configured": false, 00:16:02.325 "data_offset": 2048, 00:16:02.325 "data_size": 63488 00:16:02.325 } 00:16:02.325 ] 00:16:02.325 }' 00:16:02.325 06:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.325 06:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.890 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:02.890 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:02.890 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:02.890 [2024-07-23 06:29:15.400852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:02.890 [2024-07-23 06:29:15.400912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.890 [2024-07-23 06:29:15.400925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2df80ac34780 00:16:02.890 [2024-07-23 06:29:15.400934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.890 [2024-07-23 06:29:15.401051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.890 [2024-07-23 06:29:15.401063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:02.890 [2024-07-23 06:29:15.401092] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:02.890 [2024-07-23 06:29:15.401107] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:02.890 pt2 00:16:03.148 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:03.148 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:03.148 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:03.407 [2024-07-23 06:29:15.696880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:03.407 [2024-07-23 06:29:15.696957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.407 [2024-07-23 06:29:15.696970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2df80ac35b80 00:16:03.407 
[2024-07-23 06:29:15.696979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.407 [2024-07-23 06:29:15.697097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.407 [2024-07-23 06:29:15.697109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:03.407 [2024-07-23 06:29:15.697132] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:03.407 [2024-07-23 06:29:15.697142] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:03.407 pt3 00:16:03.407 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:03.407 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:03.407 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:03.666 [2024-07-23 06:29:15.940874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:03.666 [2024-07-23 06:29:15.940923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.666 [2024-07-23 06:29:15.940935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2df80ac35900 00:16:03.666 [2024-07-23 06:29:15.940943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.666 [2024-07-23 06:29:15.941056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.666 [2024-07-23 06:29:15.941067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:03.666 [2024-07-23 06:29:15.941091] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:03.666 [2024-07-23 06:29:15.941100] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:03.666 [2024-07-23 06:29:15.941132] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2df80ac34c80 00:16:03.666 [2024-07-23 06:29:15.941138] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:03.666 [2024-07-23 06:29:15.941169] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2df80ac97e20 00:16:03.666 [2024-07-23 06:29:15.941224] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2df80ac34c80 00:16:03.666 [2024-07-23 06:29:15.941230] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2df80ac34c80 00:16:03.666 [2024-07-23 06:29:15.941252] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.666 pt4 00:16:03.666 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:03.666 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:03.666 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:03.666 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:03.666 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:03.666 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:03.666 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:16:03.666 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:03.666 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.666 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.666 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.666 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.666 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.666 06:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.925 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.925 "name": "raid_bdev1", 00:16:03.925 "uuid": "ddde669a-48bc-11ef-a06c-59ddad71024c", 00:16:03.925 "strip_size_kb": 64, 00:16:03.925 "state": "online", 00:16:03.925 "raid_level": "raid0", 00:16:03.925 "superblock": true, 00:16:03.925 "num_base_bdevs": 4, 00:16:03.925 "num_base_bdevs_discovered": 4, 00:16:03.925 "num_base_bdevs_operational": 4, 00:16:03.925 "base_bdevs_list": [ 00:16:03.925 { 00:16:03.925 "name": "pt1", 00:16:03.925 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.925 "is_configured": true, 00:16:03.925 "data_offset": 2048, 00:16:03.925 "data_size": 63488 00:16:03.925 }, 00:16:03.925 { 00:16:03.925 "name": "pt2", 00:16:03.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.925 "is_configured": true, 00:16:03.925 "data_offset": 2048, 00:16:03.925 "data_size": 63488 00:16:03.925 }, 00:16:03.925 { 00:16:03.925 "name": "pt3", 00:16:03.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:03.925 "is_configured": true, 00:16:03.925 "data_offset": 2048, 00:16:03.925 "data_size": 63488 00:16:03.925 }, 00:16:03.925 { 00:16:03.925 "name": "pt4", 00:16:03.925 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:03.925 "is_configured": true, 00:16:03.925 "data_offset": 2048, 00:16:03.925 "data_size": 63488 00:16:03.925 } 00:16:03.925 ] 00:16:03.925 }' 00:16:03.925 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.925 06:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.234 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:04.234 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:04.234 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:04.234 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:04.234 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:04.234 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:04.234 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:04.234 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:04.234 [2024-07-23 06:29:16.744941] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.492 06:29:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:04.492 "name": "raid_bdev1", 00:16:04.492 "aliases": [ 00:16:04.492 "ddde669a-48bc-11ef-a06c-59ddad71024c" 00:16:04.492 ], 00:16:04.492 "product_name": "Raid Volume", 00:16:04.492 "block_size": 512, 00:16:04.492 "num_blocks": 253952, 00:16:04.492 "uuid": "ddde669a-48bc-11ef-a06c-59ddad71024c", 00:16:04.492 "assigned_rate_limits": { 00:16:04.492 "rw_ios_per_sec": 0, 00:16:04.492 "rw_mbytes_per_sec": 0, 00:16:04.492 "r_mbytes_per_sec": 0, 00:16:04.492 "w_mbytes_per_sec": 0 00:16:04.492 }, 00:16:04.492 "claimed": false, 00:16:04.492 "zoned": false, 00:16:04.492 "supported_io_types": { 00:16:04.492 "read": true, 00:16:04.492 "write": true, 00:16:04.492 "unmap": true, 00:16:04.492 "flush": true, 00:16:04.492 "reset": true, 00:16:04.492 "nvme_admin": false, 00:16:04.492 "nvme_io": false, 00:16:04.492 "nvme_io_md": false, 00:16:04.492 "write_zeroes": true, 00:16:04.492 "zcopy": false, 00:16:04.492 "get_zone_info": false, 00:16:04.492 "zone_management": false, 00:16:04.492 "zone_append": false, 00:16:04.492 "compare": false, 00:16:04.492 "compare_and_write": false, 00:16:04.492 "abort": false, 00:16:04.492 "seek_hole": false, 00:16:04.492 "seek_data": false, 00:16:04.492 "copy": false, 00:16:04.492 "nvme_iov_md": false 00:16:04.492 }, 00:16:04.492 "memory_domains": [ 00:16:04.492 { 00:16:04.492 "dma_device_id": "system", 00:16:04.492 "dma_device_type": 1 00:16:04.492 }, 00:16:04.492 { 00:16:04.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.492 "dma_device_type": 2 00:16:04.492 }, 00:16:04.492 { 00:16:04.492 "dma_device_id": "system", 00:16:04.492 "dma_device_type": 1 00:16:04.492 }, 00:16:04.492 { 00:16:04.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.492 "dma_device_type": 2 00:16:04.492 }, 00:16:04.492 { 00:16:04.492 "dma_device_id": "system", 00:16:04.492 "dma_device_type": 1 00:16:04.492 }, 00:16:04.492 { 00:16:04.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.492 "dma_device_type": 2 00:16:04.492 }, 00:16:04.492 { 00:16:04.492 "dma_device_id": "system", 00:16:04.492 "dma_device_type": 1 00:16:04.492 }, 00:16:04.492 { 00:16:04.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.492 "dma_device_type": 2 00:16:04.492 } 00:16:04.492 ], 00:16:04.492 "driver_specific": { 00:16:04.492 "raid": { 00:16:04.492 "uuid": "ddde669a-48bc-11ef-a06c-59ddad71024c", 00:16:04.492 "strip_size_kb": 64, 00:16:04.492 "state": "online", 00:16:04.492 "raid_level": "raid0", 00:16:04.492 "superblock": true, 00:16:04.492 "num_base_bdevs": 4, 00:16:04.492 "num_base_bdevs_discovered": 4, 00:16:04.492 "num_base_bdevs_operational": 4, 00:16:04.492 "base_bdevs_list": [ 00:16:04.492 { 00:16:04.492 "name": "pt1", 00:16:04.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.492 "is_configured": true, 00:16:04.492 "data_offset": 2048, 00:16:04.492 "data_size": 63488 00:16:04.492 }, 00:16:04.492 { 00:16:04.492 "name": "pt2", 00:16:04.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.492 "is_configured": true, 00:16:04.492 "data_offset": 2048, 00:16:04.492 "data_size": 63488 00:16:04.492 }, 00:16:04.492 { 00:16:04.492 "name": "pt3", 00:16:04.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:04.492 "is_configured": true, 00:16:04.492 "data_offset": 2048, 00:16:04.492 "data_size": 63488 00:16:04.492 }, 00:16:04.492 { 00:16:04.492 "name": "pt4", 00:16:04.492 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:04.492 "is_configured": true, 00:16:04.492 "data_offset": 2048, 00:16:04.492 
"data_size": 63488 00:16:04.492 } 00:16:04.492 ] 00:16:04.492 } 00:16:04.492 } 00:16:04.492 }' 00:16:04.492 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.492 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:04.492 pt2 00:16:04.492 pt3 00:16:04.492 pt4' 00:16:04.492 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:04.492 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:04.492 06:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:04.492 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:04.492 "name": "pt1", 00:16:04.492 "aliases": [ 00:16:04.492 "00000000-0000-0000-0000-000000000001" 00:16:04.493 ], 00:16:04.493 "product_name": "passthru", 00:16:04.493 "block_size": 512, 00:16:04.493 "num_blocks": 65536, 00:16:04.493 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.493 "assigned_rate_limits": { 00:16:04.493 "rw_ios_per_sec": 0, 00:16:04.493 "rw_mbytes_per_sec": 0, 00:16:04.493 "r_mbytes_per_sec": 0, 00:16:04.493 "w_mbytes_per_sec": 0 00:16:04.493 }, 00:16:04.493 "claimed": true, 00:16:04.493 "claim_type": "exclusive_write", 00:16:04.493 "zoned": false, 00:16:04.493 "supported_io_types": { 00:16:04.493 "read": true, 00:16:04.493 "write": true, 00:16:04.493 "unmap": true, 00:16:04.493 "flush": true, 00:16:04.493 "reset": true, 00:16:04.493 "nvme_admin": false, 00:16:04.493 "nvme_io": false, 00:16:04.493 "nvme_io_md": false, 00:16:04.493 "write_zeroes": true, 00:16:04.493 "zcopy": true, 00:16:04.493 "get_zone_info": false, 00:16:04.493 "zone_management": false, 00:16:04.493 "zone_append": false, 00:16:04.493 "compare": false, 00:16:04.493 "compare_and_write": false, 00:16:04.493 "abort": true, 00:16:04.493 "seek_hole": false, 00:16:04.493 "seek_data": false, 00:16:04.493 "copy": true, 00:16:04.493 "nvme_iov_md": false 00:16:04.493 }, 00:16:04.493 "memory_domains": [ 00:16:04.493 { 00:16:04.493 "dma_device_id": "system", 00:16:04.493 "dma_device_type": 1 00:16:04.493 }, 00:16:04.493 { 00:16:04.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.493 "dma_device_type": 2 00:16:04.493 } 00:16:04.493 ], 00:16:04.493 "driver_specific": { 00:16:04.493 "passthru": { 00:16:04.493 "name": "pt1", 00:16:04.493 "base_bdev_name": "malloc1" 00:16:04.493 } 00:16:04.493 } 00:16:04.493 }' 00:16:04.493 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:04.752 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:04.752 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:04.752 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:04.752 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:04.752 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:04.752 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:04.752 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:04.752 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:04.752 06:29:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:04.752 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:04.752 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:04.752 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:04.752 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:04.752 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:05.009 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:05.009 "name": "pt2", 00:16:05.009 "aliases": [ 00:16:05.009 "00000000-0000-0000-0000-000000000002" 00:16:05.009 ], 00:16:05.009 "product_name": "passthru", 00:16:05.009 "block_size": 512, 00:16:05.009 "num_blocks": 65536, 00:16:05.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.010 "assigned_rate_limits": { 00:16:05.010 "rw_ios_per_sec": 0, 00:16:05.010 "rw_mbytes_per_sec": 0, 00:16:05.010 "r_mbytes_per_sec": 0, 00:16:05.010 "w_mbytes_per_sec": 0 00:16:05.010 }, 00:16:05.010 "claimed": true, 00:16:05.010 "claim_type": "exclusive_write", 00:16:05.010 "zoned": false, 00:16:05.010 "supported_io_types": { 00:16:05.010 "read": true, 00:16:05.010 "write": true, 00:16:05.010 "unmap": true, 00:16:05.010 "flush": true, 00:16:05.010 "reset": true, 00:16:05.010 "nvme_admin": false, 00:16:05.010 "nvme_io": false, 00:16:05.010 "nvme_io_md": false, 00:16:05.010 "write_zeroes": true, 00:16:05.010 "zcopy": true, 00:16:05.010 "get_zone_info": false, 00:16:05.010 "zone_management": false, 00:16:05.010 "zone_append": false, 00:16:05.010 "compare": false, 00:16:05.010 "compare_and_write": false, 00:16:05.010 "abort": true, 00:16:05.010 "seek_hole": false, 00:16:05.010 "seek_data": false, 00:16:05.010 "copy": true, 00:16:05.010 "nvme_iov_md": false 00:16:05.010 }, 00:16:05.010 "memory_domains": [ 00:16:05.010 { 00:16:05.010 "dma_device_id": "system", 00:16:05.010 "dma_device_type": 1 00:16:05.010 }, 00:16:05.010 { 00:16:05.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.010 "dma_device_type": 2 00:16:05.010 } 00:16:05.010 ], 00:16:05.010 "driver_specific": { 00:16:05.010 "passthru": { 00:16:05.010 "name": "pt2", 00:16:05.010 "base_bdev_name": "malloc2" 00:16:05.010 } 00:16:05.010 } 00:16:05.010 }' 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# jq .dif_type 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:05.010 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:05.267 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:05.267 "name": "pt3", 00:16:05.267 "aliases": [ 00:16:05.267 "00000000-0000-0000-0000-000000000003" 00:16:05.267 ], 00:16:05.267 "product_name": "passthru", 00:16:05.267 "block_size": 512, 00:16:05.267 "num_blocks": 65536, 00:16:05.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.267 "assigned_rate_limits": { 00:16:05.267 "rw_ios_per_sec": 0, 00:16:05.267 "rw_mbytes_per_sec": 0, 00:16:05.267 "r_mbytes_per_sec": 0, 00:16:05.267 "w_mbytes_per_sec": 0 00:16:05.267 }, 00:16:05.267 "claimed": true, 00:16:05.267 "claim_type": "exclusive_write", 00:16:05.267 "zoned": false, 00:16:05.267 "supported_io_types": { 00:16:05.267 "read": true, 00:16:05.267 "write": true, 00:16:05.267 "unmap": true, 00:16:05.267 "flush": true, 00:16:05.267 "reset": true, 00:16:05.267 "nvme_admin": false, 00:16:05.267 "nvme_io": false, 00:16:05.268 "nvme_io_md": false, 00:16:05.268 "write_zeroes": true, 00:16:05.268 "zcopy": true, 00:16:05.268 "get_zone_info": false, 00:16:05.268 "zone_management": false, 00:16:05.268 "zone_append": false, 00:16:05.268 "compare": false, 00:16:05.268 "compare_and_write": false, 00:16:05.268 "abort": true, 00:16:05.268 "seek_hole": false, 00:16:05.268 "seek_data": false, 00:16:05.268 "copy": true, 00:16:05.268 "nvme_iov_md": false 00:16:05.268 }, 00:16:05.268 "memory_domains": [ 00:16:05.268 { 00:16:05.268 "dma_device_id": "system", 00:16:05.268 "dma_device_type": 1 00:16:05.268 }, 00:16:05.268 { 00:16:05.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.268 "dma_device_type": 2 00:16:05.268 } 00:16:05.268 ], 00:16:05.268 "driver_specific": { 00:16:05.268 "passthru": { 00:16:05.268 "name": "pt3", 00:16:05.268 "base_bdev_name": "malloc3" 00:16:05.268 } 00:16:05.268 } 00:16:05.268 }' 00:16:05.268 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.268 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.268 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:05.268 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.268 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.525 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:05.525 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.525 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.525 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:05.525 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.525 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.525 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:05.525 06:29:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:05.525 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:05.525 06:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:05.784 "name": "pt4", 00:16:05.784 "aliases": [ 00:16:05.784 "00000000-0000-0000-0000-000000000004" 00:16:05.784 ], 00:16:05.784 "product_name": "passthru", 00:16:05.784 "block_size": 512, 00:16:05.784 "num_blocks": 65536, 00:16:05.784 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:05.784 "assigned_rate_limits": { 00:16:05.784 "rw_ios_per_sec": 0, 00:16:05.784 "rw_mbytes_per_sec": 0, 00:16:05.784 "r_mbytes_per_sec": 0, 00:16:05.784 "w_mbytes_per_sec": 0 00:16:05.784 }, 00:16:05.784 "claimed": true, 00:16:05.784 "claim_type": "exclusive_write", 00:16:05.784 "zoned": false, 00:16:05.784 "supported_io_types": { 00:16:05.784 "read": true, 00:16:05.784 "write": true, 00:16:05.784 "unmap": true, 00:16:05.784 "flush": true, 00:16:05.784 "reset": true, 00:16:05.784 "nvme_admin": false, 00:16:05.784 "nvme_io": false, 00:16:05.784 "nvme_io_md": false, 00:16:05.784 "write_zeroes": true, 00:16:05.784 "zcopy": true, 00:16:05.784 "get_zone_info": false, 00:16:05.784 "zone_management": false, 00:16:05.784 "zone_append": false, 00:16:05.784 "compare": false, 00:16:05.784 "compare_and_write": false, 00:16:05.784 "abort": true, 00:16:05.784 "seek_hole": false, 00:16:05.784 "seek_data": false, 00:16:05.784 "copy": true, 00:16:05.784 "nvme_iov_md": false 00:16:05.784 }, 00:16:05.784 "memory_domains": [ 00:16:05.784 { 00:16:05.784 "dma_device_id": "system", 00:16:05.784 "dma_device_type": 1 00:16:05.784 }, 00:16:05.784 { 00:16:05.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.784 "dma_device_type": 2 00:16:05.784 } 00:16:05.784 ], 00:16:05.784 "driver_specific": { 00:16:05.784 "passthru": { 00:16:05.784 "name": "pt4", 00:16:05.784 "base_bdev_name": "malloc4" 00:16:05.784 } 00:16:05.784 } 00:16:05.784 }' 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:16:05.784 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:06.043 [2024-07-23 06:29:18.388963] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' ddde669a-48bc-11ef-a06c-59ddad71024c '!=' ddde669a-48bc-11ef-a06c-59ddad71024c ']' 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 60060 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 60060 ']' 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 60060 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 60060 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:16:06.043 killing process with pid 60060 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60060' 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 60060 00:16:06.043 [2024-07-23 06:29:18.417589] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.043 [2024-07-23 06:29:18.417615] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.043 [2024-07-23 06:29:18.417632] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.043 [2024-07-23 06:29:18.417637] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2df80ac34c80 name raid_bdev1, state offline 00:16:06.043 06:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 60060 00:16:06.043 [2024-07-23 06:29:18.440753] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.302 06:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:06.302 00:16:06.302 real 0m13.667s 00:16:06.302 user 0m24.405s 00:16:06.302 sys 0m2.124s 00:16:06.302 06:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:06.302 06:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.302 ************************************ 00:16:06.302 END TEST raid_superblock_test 00:16:06.302 ************************************ 00:16:06.302 06:29:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:06.302 06:29:18 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:16:06.302 06:29:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:06.302 06:29:18 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.302 06:29:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:06.302 ************************************ 00:16:06.302 START TEST raid_read_error_test 00:16:06.302 ************************************ 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.6c65Z2lzAg 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60461 00:16:06.302 06:29:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60461 /var/tmp/spdk-raid.sock 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 60461 ']' 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.302 06:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.302 [2024-07-23 06:29:18.682283] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:06.302 [2024-07-23 06:29:18.682511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:06.868 EAL: TSC is not safe to use in SMP mode 00:16:06.868 EAL: TSC is not invariant 00:16:06.868 [2024-07-23 06:29:19.205830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.868 [2024-07-23 06:29:19.297093] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:16:06.868 [2024-07-23 06:29:19.299229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.868 [2024-07-23 06:29:19.299986] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.868 [2024-07-23 06:29:19.300000] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.490 06:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.490 06:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:07.490 06:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:07.490 06:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:07.747 BaseBdev1_malloc 00:16:07.747 06:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:08.005 true 00:16:08.005 06:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:08.262 [2024-07-23 06:29:20.608496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:08.262 [2024-07-23 06:29:20.608565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.262 [2024-07-23 06:29:20.608593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x391e5fc34780 00:16:08.262 [2024-07-23 06:29:20.608602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.263 [2024-07-23 06:29:20.609285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.263 [2024-07-23 06:29:20.609313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:08.263 BaseBdev1 00:16:08.263 06:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:08.263 06:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:08.520 BaseBdev2_malloc 00:16:08.520 06:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:08.778 true 00:16:08.778 06:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:09.035 [2024-07-23 06:29:21.400513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:09.035 [2024-07-23 06:29:21.400570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.035 [2024-07-23 06:29:21.400599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x391e5fc34c80 00:16:09.035 [2024-07-23 06:29:21.400608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.035 [2024-07-23 06:29:21.401287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.035 [2024-07-23 06:29:21.401314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:16:09.035 BaseBdev2 00:16:09.035 06:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:09.035 06:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:09.293 BaseBdev3_malloc 00:16:09.293 06:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:09.551 true 00:16:09.551 06:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:09.809 [2024-07-23 06:29:22.112535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:09.809 [2024-07-23 06:29:22.112601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.809 [2024-07-23 06:29:22.112631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x391e5fc35180 00:16:09.809 [2024-07-23 06:29:22.112641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.809 [2024-07-23 06:29:22.113309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.809 [2024-07-23 06:29:22.113336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:09.809 BaseBdev3 00:16:09.809 06:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:09.809 06:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:10.067 BaseBdev4_malloc 00:16:10.067 06:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:16:10.326 true 00:16:10.326 06:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:10.583 [2024-07-23 06:29:22.872550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:10.583 [2024-07-23 06:29:22.872612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.583 [2024-07-23 06:29:22.872643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x391e5fc35680 00:16:10.583 [2024-07-23 06:29:22.872652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.584 [2024-07-23 06:29:22.873324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.584 [2024-07-23 06:29:22.873353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:10.584 BaseBdev4 00:16:10.584 06:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:10.841 [2024-07-23 06:29:23.132560] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.841 [2024-07-23 06:29:23.133163] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.841 [2024-07-23 06:29:23.133189] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:10.841 [2024-07-23 06:29:23.133204] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:10.841 [2024-07-23 06:29:23.133273] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x391e5fc35900 00:16:10.841 [2024-07-23 06:29:23.133280] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:10.841 [2024-07-23 06:29:23.133321] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x391e5fca0e20 00:16:10.841 [2024-07-23 06:29:23.133398] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x391e5fc35900 00:16:10.841 [2024-07-23 06:29:23.133403] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x391e5fc35900 00:16:10.841 [2024-07-23 06:29:23.133431] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.841 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:10.841 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:10.841 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:10.841 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:10.841 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:10.841 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:10.841 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:10.841 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:10.841 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:10.841 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:10.841 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.841 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.099 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.099 "name": "raid_bdev1", 00:16:11.099 "uuid": "e6949d45-48bc-11ef-a06c-59ddad71024c", 00:16:11.099 "strip_size_kb": 64, 00:16:11.099 "state": "online", 00:16:11.099 "raid_level": "raid0", 00:16:11.099 "superblock": true, 00:16:11.099 "num_base_bdevs": 4, 00:16:11.099 "num_base_bdevs_discovered": 4, 00:16:11.099 "num_base_bdevs_operational": 4, 00:16:11.099 "base_bdevs_list": [ 00:16:11.099 { 00:16:11.099 "name": "BaseBdev1", 00:16:11.099 "uuid": "b1121444-00e4-0e5c-a88f-6acdd50ee3a6", 00:16:11.099 "is_configured": true, 00:16:11.099 "data_offset": 2048, 00:16:11.099 "data_size": 63488 00:16:11.099 }, 00:16:11.099 { 00:16:11.099 "name": "BaseBdev2", 00:16:11.099 "uuid": "ed1fae02-1151-fb5c-9bcf-214d2cdd81ef", 00:16:11.099 "is_configured": true, 00:16:11.099 "data_offset": 2048, 00:16:11.099 "data_size": 63488 00:16:11.099 }, 00:16:11.099 { 00:16:11.099 "name": "BaseBdev3", 00:16:11.099 "uuid": 
"49cc5237-a1d8-0a5b-bc63-fc7ce16ea479", 00:16:11.099 "is_configured": true, 00:16:11.099 "data_offset": 2048, 00:16:11.099 "data_size": 63488 00:16:11.099 }, 00:16:11.099 { 00:16:11.099 "name": "BaseBdev4", 00:16:11.099 "uuid": "66338433-25b8-d159-88b5-c4305ee4f09f", 00:16:11.099 "is_configured": true, 00:16:11.099 "data_offset": 2048, 00:16:11.099 "data_size": 63488 00:16:11.099 } 00:16:11.099 ] 00:16:11.099 }' 00:16:11.099 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.099 06:29:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.357 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:11.357 06:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:11.357 [2024-07-23 06:29:23.872786] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x391e5fca0ec0 00:16:12.731 06:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.731 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.989 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.989 "name": "raid_bdev1", 00:16:12.989 "uuid": "e6949d45-48bc-11ef-a06c-59ddad71024c", 00:16:12.989 "strip_size_kb": 64, 00:16:12.989 "state": "online", 00:16:12.989 "raid_level": "raid0", 00:16:12.989 "superblock": true, 00:16:12.989 "num_base_bdevs": 4, 00:16:12.989 "num_base_bdevs_discovered": 4, 00:16:12.989 "num_base_bdevs_operational": 4, 00:16:12.989 "base_bdevs_list": [ 00:16:12.989 { 00:16:12.989 "name": "BaseBdev1", 00:16:12.989 "uuid": 
"b1121444-00e4-0e5c-a88f-6acdd50ee3a6", 00:16:12.989 "is_configured": true, 00:16:12.989 "data_offset": 2048, 00:16:12.989 "data_size": 63488 00:16:12.989 }, 00:16:12.989 { 00:16:12.989 "name": "BaseBdev2", 00:16:12.989 "uuid": "ed1fae02-1151-fb5c-9bcf-214d2cdd81ef", 00:16:12.989 "is_configured": true, 00:16:12.989 "data_offset": 2048, 00:16:12.989 "data_size": 63488 00:16:12.989 }, 00:16:12.989 { 00:16:12.989 "name": "BaseBdev3", 00:16:12.989 "uuid": "49cc5237-a1d8-0a5b-bc63-fc7ce16ea479", 00:16:12.989 "is_configured": true, 00:16:12.989 "data_offset": 2048, 00:16:12.989 "data_size": 63488 00:16:12.989 }, 00:16:12.989 { 00:16:12.989 "name": "BaseBdev4", 00:16:12.989 "uuid": "66338433-25b8-d159-88b5-c4305ee4f09f", 00:16:12.989 "is_configured": true, 00:16:12.989 "data_offset": 2048, 00:16:12.989 "data_size": 63488 00:16:12.989 } 00:16:12.989 ] 00:16:12.989 }' 00:16:12.989 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.989 06:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.247 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:13.506 [2024-07-23 06:29:25.891805] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:13.506 [2024-07-23 06:29:25.891837] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.506 [2024-07-23 06:29:25.892280] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.506 [2024-07-23 06:29:25.892293] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.506 [2024-07-23 06:29:25.892302] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.506 [2024-07-23 06:29:25.892307] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x391e5fc35900 name raid_bdev1, state offline 00:16:13.506 0 00:16:13.506 06:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60461 00:16:13.506 06:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 60461 ']' 00:16:13.506 06:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 60461 00:16:13.506 06:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:16:13.506 06:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:13.506 06:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 60461 00:16:13.506 06:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:16:13.506 06:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:16:13.506 06:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:16:13.506 killing process with pid 60461 00:16:13.506 06:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60461' 00:16:13.506 06:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 60461 00:16:13.506 [2024-07-23 06:29:25.918768] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.506 06:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 60461 00:16:13.506 [2024-07-23 06:29:25.943131] 
bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:13.786 06:29:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.6c65Z2lzAg 00:16:13.786 06:29:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:13.786 06:29:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:13.786 06:29:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:16:13.786 06:29:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:16:13.786 06:29:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:13.786 06:29:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:13.786 06:29:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:16:13.786 00:16:13.786 real 0m7.475s 00:16:13.786 user 0m12.030s 00:16:13.786 sys 0m1.149s 00:16:13.786 06:29:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.786 ************************************ 00:16:13.786 06:29:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.786 END TEST raid_read_error_test 00:16:13.786 ************************************ 00:16:13.786 06:29:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:13.786 06:29:26 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:16:13.786 06:29:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:13.786 06:29:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.786 06:29:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:13.786 ************************************ 00:16:13.786 START TEST raid_write_error_test 00:16:13.786 ************************************ 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ngb754gDHB 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60599 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60599 /var/tmp/spdk-raid.sock 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 60599 ']' 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.786 06:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.786 [2024-07-23 06:29:26.215083] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:13.786 [2024-07-23 06:29:26.215274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:14.354 EAL: TSC is not safe to use in SMP mode 00:16:14.354 EAL: TSC is not invariant 00:16:14.354 [2024-07-23 06:29:26.765440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.354 [2024-07-23 06:29:26.855132] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
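The error pass is driven the same way in both the read and the write test: perform_tests kicks off the bdevperf workload, a failure is armed on the first member's error bdev, and once bdevperf exits the failure rate for raid_bdev1 is scraped out of its log file. The sketch below condenses that sequence from the commands recorded in this run; the backgrounding of perform_tests is inferred from how the @823/@824 trace lines interleave, and $bdevperf_log stands for the mktemp file shown above (/raidtest/tmp.ngb754gDHB for this write pass):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Drive I/O against raid_bdev1 while an error is armed on the first member
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests &
    sleep 1
    $rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure   # 'read failure' in the read pass above

    # raid0 carries no redundancy, so the injected failures must surface as failed I/O
    fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s != "0.00" ]]

Because has_redundancy returns 1 for raid0, the test demands a non-zero failure rate: the read pass above measured 0.50 failures per second, and the write pass below measures 0.49.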
00:16:14.354 [2024-07-23 06:29:26.857518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.354 [2024-07-23 06:29:26.858380] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.354 [2024-07-23 06:29:26.858395] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.921 06:29:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.921 06:29:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:14.921 06:29:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:14.921 06:29:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:15.180 BaseBdev1_malloc 00:16:15.180 06:29:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:15.439 true 00:16:15.439 06:29:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:15.697 [2024-07-23 06:29:28.055526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:15.697 [2024-07-23 06:29:28.055603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.698 [2024-07-23 06:29:28.055632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18547b234780 00:16:15.698 [2024-07-23 06:29:28.055640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.698 [2024-07-23 06:29:28.056356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.698 [2024-07-23 06:29:28.056390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:15.698 BaseBdev1 00:16:15.698 06:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:15.698 06:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:15.956 BaseBdev2_malloc 00:16:15.956 06:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:16.215 true 00:16:16.215 06:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:16.474 [2024-07-23 06:29:28.847537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:16.474 [2024-07-23 06:29:28.847593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.474 [2024-07-23 06:29:28.847622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18547b234c80 00:16:16.474 [2024-07-23 06:29:28.847631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.474 [2024-07-23 06:29:28.848320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.474 [2024-07-23 06:29:28.848378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:16:16.474 BaseBdev2 00:16:16.474 06:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:16.474 06:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:16.732 BaseBdev3_malloc 00:16:16.732 06:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:16.991 true 00:16:16.991 06:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:17.249 [2024-07-23 06:29:29.687556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:17.249 [2024-07-23 06:29:29.687613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.249 [2024-07-23 06:29:29.687640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18547b235180 00:16:17.249 [2024-07-23 06:29:29.687648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.249 [2024-07-23 06:29:29.688314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.249 [2024-07-23 06:29:29.688341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:17.249 BaseBdev3 00:16:17.249 06:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:17.249 06:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:17.508 BaseBdev4_malloc 00:16:17.508 06:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:16:17.766 true 00:16:17.766 06:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:18.025 [2024-07-23 06:29:30.451591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:18.025 [2024-07-23 06:29:30.451647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.025 [2024-07-23 06:29:30.451672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18547b235680 00:16:18.025 [2024-07-23 06:29:30.451682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.025 [2024-07-23 06:29:30.452365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.025 [2024-07-23 06:29:30.452391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:18.025 BaseBdev4 00:16:18.025 06:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:18.284 [2024-07-23 06:29:30.707613] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.284 [2024-07-23 06:29:30.708206] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:18.284 [2024-07-23 06:29:30.708242] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.284 [2024-07-23 06:29:30.708257] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:18.284 [2024-07-23 06:29:30.708324] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x18547b235900 00:16:18.284 [2024-07-23 06:29:30.708331] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:18.284 [2024-07-23 06:29:30.708373] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18547b2a0e20 00:16:18.284 [2024-07-23 06:29:30.708451] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18547b235900 00:16:18.284 [2024-07-23 06:29:30.708457] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x18547b235900 00:16:18.284 [2024-07-23 06:29:30.708485] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.284 06:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:18.284 06:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:18.284 06:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:18.284 06:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:18.284 06:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:18.284 06:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:18.284 06:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.284 06:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.284 06:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.284 06:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.284 06:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.284 06:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.559 06:29:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.559 "name": "raid_bdev1", 00:16:18.559 "uuid": "eb1879bf-48bc-11ef-a06c-59ddad71024c", 00:16:18.559 "strip_size_kb": 64, 00:16:18.559 "state": "online", 00:16:18.559 "raid_level": "raid0", 00:16:18.559 "superblock": true, 00:16:18.559 "num_base_bdevs": 4, 00:16:18.559 "num_base_bdevs_discovered": 4, 00:16:18.559 "num_base_bdevs_operational": 4, 00:16:18.559 "base_bdevs_list": [ 00:16:18.559 { 00:16:18.559 "name": "BaseBdev1", 00:16:18.559 "uuid": "594dce64-f0a1-a052-be8e-875c1a3948da", 00:16:18.559 "is_configured": true, 00:16:18.559 "data_offset": 2048, 00:16:18.559 "data_size": 63488 00:16:18.559 }, 00:16:18.559 { 00:16:18.559 "name": "BaseBdev2", 00:16:18.559 "uuid": "a30d8ff1-d118-8c57-9ee6-36e35beb9a96", 00:16:18.559 "is_configured": true, 00:16:18.559 "data_offset": 2048, 00:16:18.559 "data_size": 63488 00:16:18.559 }, 00:16:18.559 { 00:16:18.559 "name": "BaseBdev3", 00:16:18.559 "uuid": 
"6375f162-1faf-0e50-aabb-b748eb0f9292", 00:16:18.559 "is_configured": true, 00:16:18.559 "data_offset": 2048, 00:16:18.559 "data_size": 63488 00:16:18.559 }, 00:16:18.559 { 00:16:18.559 "name": "BaseBdev4", 00:16:18.559 "uuid": "f3c9b5cb-b092-6659-b961-a3748dc01652", 00:16:18.559 "is_configured": true, 00:16:18.559 "data_offset": 2048, 00:16:18.559 "data_size": 63488 00:16:18.559 } 00:16:18.559 ] 00:16:18.559 }' 00:16:18.559 06:29:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.559 06:29:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.128 06:29:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:19.128 06:29:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:19.128 [2024-07-23 06:29:31.503872] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18547b2a0ec0 00:16:20.063 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.321 06:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.579 06:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.579 "name": "raid_bdev1", 00:16:20.579 "uuid": "eb1879bf-48bc-11ef-a06c-59ddad71024c", 00:16:20.579 "strip_size_kb": 64, 00:16:20.579 "state": "online", 00:16:20.579 "raid_level": "raid0", 00:16:20.579 "superblock": true, 00:16:20.579 "num_base_bdevs": 4, 00:16:20.579 "num_base_bdevs_discovered": 4, 00:16:20.579 "num_base_bdevs_operational": 4, 00:16:20.579 "base_bdevs_list": [ 00:16:20.579 { 00:16:20.579 "name": "BaseBdev1", 00:16:20.579 "uuid": 
"594dce64-f0a1-a052-be8e-875c1a3948da", 00:16:20.579 "is_configured": true, 00:16:20.579 "data_offset": 2048, 00:16:20.579 "data_size": 63488 00:16:20.579 }, 00:16:20.579 { 00:16:20.579 "name": "BaseBdev2", 00:16:20.579 "uuid": "a30d8ff1-d118-8c57-9ee6-36e35beb9a96", 00:16:20.579 "is_configured": true, 00:16:20.579 "data_offset": 2048, 00:16:20.579 "data_size": 63488 00:16:20.579 }, 00:16:20.579 { 00:16:20.579 "name": "BaseBdev3", 00:16:20.579 "uuid": "6375f162-1faf-0e50-aabb-b748eb0f9292", 00:16:20.579 "is_configured": true, 00:16:20.579 "data_offset": 2048, 00:16:20.579 "data_size": 63488 00:16:20.579 }, 00:16:20.579 { 00:16:20.579 "name": "BaseBdev4", 00:16:20.579 "uuid": "f3c9b5cb-b092-6659-b961-a3748dc01652", 00:16:20.579 "is_configured": true, 00:16:20.579 "data_offset": 2048, 00:16:20.579 "data_size": 63488 00:16:20.579 } 00:16:20.579 ] 00:16:20.579 }' 00:16:20.579 06:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.579 06:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.838 06:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:21.112 [2024-07-23 06:29:33.546653] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.112 [2024-07-23 06:29:33.546684] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.112 [2024-07-23 06:29:33.547025] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.112 [2024-07-23 06:29:33.547037] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.112 [2024-07-23 06:29:33.547046] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.112 [2024-07-23 06:29:33.547050] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18547b235900 name raid_bdev1, state offline 00:16:21.112 0 00:16:21.112 06:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60599 00:16:21.112 06:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 60599 ']' 00:16:21.112 06:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 60599 00:16:21.112 06:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:16:21.112 06:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:21.112 06:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 60599 00:16:21.112 06:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:16:21.112 06:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:16:21.112 06:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:16:21.112 killing process with pid 60599 00:16:21.112 06:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60599' 00:16:21.112 06:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 60599 00:16:21.112 [2024-07-23 06:29:33.574360] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.112 06:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 60599 00:16:21.112 [2024-07-23 
06:29:33.598123] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.390 06:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ngb754gDHB 00:16:21.390 06:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:21.390 06:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:21.390 06:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:16:21.390 06:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:16:21.390 06:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:21.390 06:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:21.390 06:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:16:21.390 00:16:21.390 real 0m7.590s 00:16:21.390 user 0m12.189s 00:16:21.390 sys 0m1.212s 00:16:21.390 06:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:21.390 06:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.390 ************************************ 00:16:21.390 END TEST raid_write_error_test 00:16:21.390 ************************************ 00:16:21.390 06:29:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:21.390 06:29:33 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:21.390 06:29:33 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:16:21.390 06:29:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:21.390 06:29:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.390 06:29:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.390 ************************************ 00:16:21.390 START TEST raid_state_function_test 00:16:21.390 ************************************ 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo 
BaseBdev3 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=60735 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 60735' 00:16:21.390 Process raid pid: 60735 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 60735 /var/tmp/spdk-raid.sock 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 60735 ']' 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.390 06:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.390 [2024-07-23 06:29:33.840591] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:21.390 [2024-07-23 06:29:33.840818] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:21.955 EAL: TSC is not safe to use in SMP mode 00:16:21.955 EAL: TSC is not invariant 00:16:21.955 [2024-07-23 06:29:34.393705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.212 [2024-07-23 06:29:34.487406] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:22.212 [2024-07-23 06:29:34.489719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.212 [2024-07-23 06:29:34.490650] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.212 [2024-07-23 06:29:34.490667] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.469 06:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.469 06:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:16:22.469 06:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:23.035 [2024-07-23 06:29:35.288185] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.035 [2024-07-23 06:29:35.288248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.035 [2024-07-23 06:29:35.288254] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.035 [2024-07-23 06:29:35.288270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.035 [2024-07-23 06:29:35.288274] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:23.035 [2024-07-23 06:29:35.288282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.035 [2024-07-23 06:29:35.288285] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:23.035 [2024-07-23 06:29:35.288292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:23.035 06:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:23.036 06:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:23.036 06:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:23.036 06:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:23.036 06:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:23.036 06:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:23.036 06:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.036 06:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.036 06:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.036 06:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.036 06:29:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.036 06:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.294 06:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.294 "name": "Existed_Raid", 00:16:23.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.294 "strip_size_kb": 64, 00:16:23.294 "state": "configuring", 00:16:23.294 "raid_level": "concat", 00:16:23.294 "superblock": false, 00:16:23.294 "num_base_bdevs": 4, 00:16:23.294 "num_base_bdevs_discovered": 0, 00:16:23.294 "num_base_bdevs_operational": 4, 00:16:23.294 "base_bdevs_list": [ 00:16:23.294 { 00:16:23.294 "name": "BaseBdev1", 00:16:23.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.294 "is_configured": false, 00:16:23.294 "data_offset": 0, 00:16:23.294 "data_size": 0 00:16:23.294 }, 00:16:23.294 { 00:16:23.294 "name": "BaseBdev2", 00:16:23.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.294 "is_configured": false, 00:16:23.294 "data_offset": 0, 00:16:23.294 "data_size": 0 00:16:23.294 }, 00:16:23.294 { 00:16:23.294 "name": "BaseBdev3", 00:16:23.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.294 "is_configured": false, 00:16:23.294 "data_offset": 0, 00:16:23.294 "data_size": 0 00:16:23.294 }, 00:16:23.294 { 00:16:23.294 "name": "BaseBdev4", 00:16:23.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.294 "is_configured": false, 00:16:23.294 "data_offset": 0, 00:16:23.294 "data_size": 0 00:16:23.294 } 00:16:23.294 ] 00:16:23.294 }' 00:16:23.294 06:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.294 06:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.553 06:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:23.811 [2024-07-23 06:29:36.208197] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.811 [2024-07-23 06:29:36.208224] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe270e034500 name Existed_Raid, state configuring 00:16:23.811 06:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:24.070 [2024-07-23 06:29:36.496225] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.070 [2024-07-23 06:29:36.496327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.070 [2024-07-23 06:29:36.496332] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.070 [2024-07-23 06:29:36.496341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.070 [2024-07-23 06:29:36.496345] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:24.070 [2024-07-23 06:29:36.496352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.070 [2024-07-23 06:29:36.496355] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
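At this point Existed_Raid has been created before any of its base bdevs exist, so bdev_raid_get_bdevs reports it as "configuring" with num_base_bdevs_discovered 0; verify_raid_bdev_state pulls that JSON through the jq select() shown above and compares it against the caller's expectations. The sketch below illustrates those comparisons — only the select() filter appears in the log, so the per-field jq extraction is an assumption about how the helper reads the record:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    # Compare the reported fields with the expected values passed to the helper
    [[ $(jq -r '.state'      <<<"$info") == configuring ]]            # expected_state
    [[ $(jq -r '.raid_level' <<<"$info") == concat ]]                 # raid_level
    [[ $(jq -r '.strip_size_kb' <<<"$info") -eq 64 ]]                 # strip_size
    [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") -eq 4 ]]     # num_base_bdevs_operational
    # num_base_bdevs_discovered is 0 here because no base bdev exists yet

As BaseBdev1 is created below and claimed by the array, num_base_bdevs_discovered moves from 0 to 1 while the state presumably stays "configuring" until all four members exist.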
00:16:24.070 [2024-07-23 06:29:36.496363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:24.070 06:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:24.328 [2024-07-23 06:29:36.737409] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.328 BaseBdev1 00:16:24.328 06:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:24.328 06:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:24.328 06:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:24.328 06:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:24.328 06:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:24.328 06:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:24.328 06:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:24.587 06:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:24.846 [ 00:16:24.846 { 00:16:24.846 "name": "BaseBdev1", 00:16:24.846 "aliases": [ 00:16:24.846 "eeb06095-48bc-11ef-a06c-59ddad71024c" 00:16:24.846 ], 00:16:24.846 "product_name": "Malloc disk", 00:16:24.846 "block_size": 512, 00:16:24.846 "num_blocks": 65536, 00:16:24.846 "uuid": "eeb06095-48bc-11ef-a06c-59ddad71024c", 00:16:24.846 "assigned_rate_limits": { 00:16:24.846 "rw_ios_per_sec": 0, 00:16:24.846 "rw_mbytes_per_sec": 0, 00:16:24.846 "r_mbytes_per_sec": 0, 00:16:24.846 "w_mbytes_per_sec": 0 00:16:24.846 }, 00:16:24.846 "claimed": true, 00:16:24.846 "claim_type": "exclusive_write", 00:16:24.846 "zoned": false, 00:16:24.846 "supported_io_types": { 00:16:24.846 "read": true, 00:16:24.846 "write": true, 00:16:24.846 "unmap": true, 00:16:24.846 "flush": true, 00:16:24.846 "reset": true, 00:16:24.846 "nvme_admin": false, 00:16:24.846 "nvme_io": false, 00:16:24.846 "nvme_io_md": false, 00:16:24.846 "write_zeroes": true, 00:16:24.846 "zcopy": true, 00:16:24.846 "get_zone_info": false, 00:16:24.846 "zone_management": false, 00:16:24.846 "zone_append": false, 00:16:24.846 "compare": false, 00:16:24.846 "compare_and_write": false, 00:16:24.846 "abort": true, 00:16:24.846 "seek_hole": false, 00:16:24.846 "seek_data": false, 00:16:24.846 "copy": true, 00:16:24.846 "nvme_iov_md": false 00:16:24.846 }, 00:16:24.846 "memory_domains": [ 00:16:24.846 { 00:16:24.846 "dma_device_id": "system", 00:16:24.846 "dma_device_type": 1 00:16:24.846 }, 00:16:24.846 { 00:16:24.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.846 "dma_device_type": 2 00:16:24.846 } 00:16:24.846 ], 00:16:24.846 "driver_specific": {} 00:16:24.846 } 00:16:24.846 ] 00:16:24.846 06:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:24.847 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:24.847 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:16:24.847 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:24.847 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:24.847 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:24.847 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:24.847 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.847 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.847 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.847 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.847 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.847 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.106 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.106 "name": "Existed_Raid", 00:16:25.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.106 "strip_size_kb": 64, 00:16:25.106 "state": "configuring", 00:16:25.106 "raid_level": "concat", 00:16:25.106 "superblock": false, 00:16:25.106 "num_base_bdevs": 4, 00:16:25.106 "num_base_bdevs_discovered": 1, 00:16:25.106 "num_base_bdevs_operational": 4, 00:16:25.106 "base_bdevs_list": [ 00:16:25.106 { 00:16:25.106 "name": "BaseBdev1", 00:16:25.106 "uuid": "eeb06095-48bc-11ef-a06c-59ddad71024c", 00:16:25.106 "is_configured": true, 00:16:25.106 "data_offset": 0, 00:16:25.106 "data_size": 65536 00:16:25.106 }, 00:16:25.106 { 00:16:25.106 "name": "BaseBdev2", 00:16:25.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.106 "is_configured": false, 00:16:25.106 "data_offset": 0, 00:16:25.106 "data_size": 0 00:16:25.106 }, 00:16:25.106 { 00:16:25.106 "name": "BaseBdev3", 00:16:25.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.106 "is_configured": false, 00:16:25.106 "data_offset": 0, 00:16:25.106 "data_size": 0 00:16:25.106 }, 00:16:25.106 { 00:16:25.106 "name": "BaseBdev4", 00:16:25.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.106 "is_configured": false, 00:16:25.106 "data_offset": 0, 00:16:25.106 "data_size": 0 00:16:25.106 } 00:16:25.106 ] 00:16:25.106 }' 00:16:25.106 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.106 06:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.404 06:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:25.970 [2024-07-23 06:29:38.216392] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.970 [2024-07-23 06:29:38.216423] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe270e034500 name Existed_Raid, state configuring 00:16:25.970 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4' -n Existed_Raid 00:16:26.228 [2024-07-23 06:29:38.512421] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.228 [2024-07-23 06:29:38.513232] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.228 [2024-07-23 06:29:38.513270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.228 [2024-07-23 06:29:38.513275] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:26.228 [2024-07-23 06:29:38.513284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:26.228 [2024-07-23 06:29:38.513288] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:26.228 [2024-07-23 06:29:38.513295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:26.228 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:26.228 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:26.228 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:26.228 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:26.228 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:26.228 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:26.228 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:26.228 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:26.228 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.228 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.228 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.228 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.228 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.228 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.486 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.486 "name": "Existed_Raid", 00:16:26.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.486 "strip_size_kb": 64, 00:16:26.486 "state": "configuring", 00:16:26.486 "raid_level": "concat", 00:16:26.486 "superblock": false, 00:16:26.486 "num_base_bdevs": 4, 00:16:26.486 "num_base_bdevs_discovered": 1, 00:16:26.486 "num_base_bdevs_operational": 4, 00:16:26.486 "base_bdevs_list": [ 00:16:26.486 { 00:16:26.486 "name": "BaseBdev1", 00:16:26.486 "uuid": "eeb06095-48bc-11ef-a06c-59ddad71024c", 00:16:26.486 "is_configured": true, 00:16:26.486 "data_offset": 0, 00:16:26.486 "data_size": 65536 00:16:26.486 }, 00:16:26.486 { 00:16:26.486 "name": "BaseBdev2", 00:16:26.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.486 "is_configured": false, 00:16:26.486 "data_offset": 0, 00:16:26.486 
"data_size": 0 00:16:26.486 }, 00:16:26.486 { 00:16:26.486 "name": "BaseBdev3", 00:16:26.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.486 "is_configured": false, 00:16:26.486 "data_offset": 0, 00:16:26.486 "data_size": 0 00:16:26.486 }, 00:16:26.486 { 00:16:26.486 "name": "BaseBdev4", 00:16:26.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.486 "is_configured": false, 00:16:26.486 "data_offset": 0, 00:16:26.486 "data_size": 0 00:16:26.486 } 00:16:26.486 ] 00:16:26.486 }' 00:16:26.487 06:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.487 06:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.744 06:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:27.002 [2024-07-23 06:29:39.468572] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.002 BaseBdev2 00:16:27.002 06:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:27.002 06:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:27.002 06:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:27.002 06:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:27.002 06:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:27.002 06:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:27.002 06:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:27.259 06:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:27.826 [ 00:16:27.826 { 00:16:27.826 "name": "BaseBdev2", 00:16:27.826 "aliases": [ 00:16:27.826 "f05145da-48bc-11ef-a06c-59ddad71024c" 00:16:27.826 ], 00:16:27.826 "product_name": "Malloc disk", 00:16:27.826 "block_size": 512, 00:16:27.826 "num_blocks": 65536, 00:16:27.826 "uuid": "f05145da-48bc-11ef-a06c-59ddad71024c", 00:16:27.826 "assigned_rate_limits": { 00:16:27.826 "rw_ios_per_sec": 0, 00:16:27.826 "rw_mbytes_per_sec": 0, 00:16:27.826 "r_mbytes_per_sec": 0, 00:16:27.826 "w_mbytes_per_sec": 0 00:16:27.826 }, 00:16:27.826 "claimed": true, 00:16:27.826 "claim_type": "exclusive_write", 00:16:27.826 "zoned": false, 00:16:27.826 "supported_io_types": { 00:16:27.826 "read": true, 00:16:27.826 "write": true, 00:16:27.826 "unmap": true, 00:16:27.826 "flush": true, 00:16:27.826 "reset": true, 00:16:27.826 "nvme_admin": false, 00:16:27.826 "nvme_io": false, 00:16:27.826 "nvme_io_md": false, 00:16:27.826 "write_zeroes": true, 00:16:27.826 "zcopy": true, 00:16:27.826 "get_zone_info": false, 00:16:27.826 "zone_management": false, 00:16:27.826 "zone_append": false, 00:16:27.826 "compare": false, 00:16:27.826 "compare_and_write": false, 00:16:27.826 "abort": true, 00:16:27.826 "seek_hole": false, 00:16:27.826 "seek_data": false, 00:16:27.826 "copy": true, 00:16:27.826 "nvme_iov_md": false 00:16:27.826 }, 00:16:27.826 "memory_domains": [ 00:16:27.826 { 00:16:27.826 "dma_device_id": "system", 00:16:27.826 "dma_device_type": 
1 00:16:27.826 }, 00:16:27.826 { 00:16:27.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.826 "dma_device_type": 2 00:16:27.826 } 00:16:27.826 ], 00:16:27.826 "driver_specific": {} 00:16:27.826 } 00:16:27.826 ] 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.826 "name": "Existed_Raid", 00:16:27.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.826 "strip_size_kb": 64, 00:16:27.826 "state": "configuring", 00:16:27.826 "raid_level": "concat", 00:16:27.826 "superblock": false, 00:16:27.826 "num_base_bdevs": 4, 00:16:27.826 "num_base_bdevs_discovered": 2, 00:16:27.826 "num_base_bdevs_operational": 4, 00:16:27.826 "base_bdevs_list": [ 00:16:27.826 { 00:16:27.826 "name": "BaseBdev1", 00:16:27.826 "uuid": "eeb06095-48bc-11ef-a06c-59ddad71024c", 00:16:27.826 "is_configured": true, 00:16:27.826 "data_offset": 0, 00:16:27.826 "data_size": 65536 00:16:27.826 }, 00:16:27.826 { 00:16:27.826 "name": "BaseBdev2", 00:16:27.826 "uuid": "f05145da-48bc-11ef-a06c-59ddad71024c", 00:16:27.826 "is_configured": true, 00:16:27.826 "data_offset": 0, 00:16:27.826 "data_size": 65536 00:16:27.826 }, 00:16:27.826 { 00:16:27.826 "name": "BaseBdev3", 00:16:27.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.826 "is_configured": false, 00:16:27.826 "data_offset": 0, 00:16:27.826 "data_size": 0 00:16:27.826 }, 00:16:27.826 { 00:16:27.826 "name": "BaseBdev4", 00:16:27.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.826 "is_configured": false, 00:16:27.826 "data_offset": 0, 00:16:27.826 "data_size": 0 00:16:27.826 } 00:16:27.826 ] 00:16:27.826 }' 00:16:27.826 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.826 
06:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.393 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:28.651 [2024-07-23 06:29:40.964599] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:28.651 BaseBdev3 00:16:28.651 06:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:28.651 06:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:28.651 06:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:28.651 06:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:28.651 06:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:28.651 06:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:28.651 06:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:28.910 06:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:29.170 [ 00:16:29.170 { 00:16:29.170 "name": "BaseBdev3", 00:16:29.170 "aliases": [ 00:16:29.170 "f1358d00-48bc-11ef-a06c-59ddad71024c" 00:16:29.170 ], 00:16:29.170 "product_name": "Malloc disk", 00:16:29.170 "block_size": 512, 00:16:29.170 "num_blocks": 65536, 00:16:29.170 "uuid": "f1358d00-48bc-11ef-a06c-59ddad71024c", 00:16:29.170 "assigned_rate_limits": { 00:16:29.170 "rw_ios_per_sec": 0, 00:16:29.170 "rw_mbytes_per_sec": 0, 00:16:29.170 "r_mbytes_per_sec": 0, 00:16:29.170 "w_mbytes_per_sec": 0 00:16:29.170 }, 00:16:29.170 "claimed": true, 00:16:29.170 "claim_type": "exclusive_write", 00:16:29.170 "zoned": false, 00:16:29.170 "supported_io_types": { 00:16:29.170 "read": true, 00:16:29.170 "write": true, 00:16:29.170 "unmap": true, 00:16:29.170 "flush": true, 00:16:29.170 "reset": true, 00:16:29.170 "nvme_admin": false, 00:16:29.170 "nvme_io": false, 00:16:29.170 "nvme_io_md": false, 00:16:29.170 "write_zeroes": true, 00:16:29.170 "zcopy": true, 00:16:29.170 "get_zone_info": false, 00:16:29.170 "zone_management": false, 00:16:29.170 "zone_append": false, 00:16:29.170 "compare": false, 00:16:29.170 "compare_and_write": false, 00:16:29.170 "abort": true, 00:16:29.170 "seek_hole": false, 00:16:29.170 "seek_data": false, 00:16:29.170 "copy": true, 00:16:29.170 "nvme_iov_md": false 00:16:29.170 }, 00:16:29.170 "memory_domains": [ 00:16:29.170 { 00:16:29.170 "dma_device_id": "system", 00:16:29.170 "dma_device_type": 1 00:16:29.170 }, 00:16:29.170 { 00:16:29.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.170 "dma_device_type": 2 00:16:29.170 } 00:16:29.170 ], 00:16:29.170 "driver_specific": {} 00:16:29.170 } 00:16:29.170 ] 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.170 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.428 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:29.428 "name": "Existed_Raid", 00:16:29.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.428 "strip_size_kb": 64, 00:16:29.428 "state": "configuring", 00:16:29.428 "raid_level": "concat", 00:16:29.428 "superblock": false, 00:16:29.428 "num_base_bdevs": 4, 00:16:29.428 "num_base_bdevs_discovered": 3, 00:16:29.428 "num_base_bdevs_operational": 4, 00:16:29.428 "base_bdevs_list": [ 00:16:29.428 { 00:16:29.428 "name": "BaseBdev1", 00:16:29.428 "uuid": "eeb06095-48bc-11ef-a06c-59ddad71024c", 00:16:29.428 "is_configured": true, 00:16:29.428 "data_offset": 0, 00:16:29.428 "data_size": 65536 00:16:29.428 }, 00:16:29.428 { 00:16:29.428 "name": "BaseBdev2", 00:16:29.428 "uuid": "f05145da-48bc-11ef-a06c-59ddad71024c", 00:16:29.428 "is_configured": true, 00:16:29.428 "data_offset": 0, 00:16:29.428 "data_size": 65536 00:16:29.428 }, 00:16:29.428 { 00:16:29.428 "name": "BaseBdev3", 00:16:29.428 "uuid": "f1358d00-48bc-11ef-a06c-59ddad71024c", 00:16:29.428 "is_configured": true, 00:16:29.428 "data_offset": 0, 00:16:29.428 "data_size": 65536 00:16:29.428 }, 00:16:29.428 { 00:16:29.428 "name": "BaseBdev4", 00:16:29.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.428 "is_configured": false, 00:16:29.428 "data_offset": 0, 00:16:29.428 "data_size": 0 00:16:29.428 } 00:16:29.428 ] 00:16:29.428 }' 00:16:29.428 06:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:29.428 06:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.994 06:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:29.994 [2024-07-23 06:29:42.500642] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:29.995 [2024-07-23 06:29:42.500672] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xe270e034a00 00:16:29.995 [2024-07-23 06:29:42.500677] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:29.995 [2024-07-23 06:29:42.500709] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe270e097e20 00:16:29.995 [2024-07-23 06:29:42.500800] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe270e034a00 00:16:29.995 [2024-07-23 06:29:42.500805] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xe270e034a00 00:16:29.995 [2024-07-23 06:29:42.500839] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.995 BaseBdev4 00:16:29.995 06:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:29.995 06:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:29.995 06:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:29.995 06:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:29.995 06:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:29.995 06:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:29.995 06:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:30.253 06:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:30.511 [ 00:16:30.511 { 00:16:30.511 "name": "BaseBdev4", 00:16:30.511 "aliases": [ 00:16:30.511 "f21fee7a-48bc-11ef-a06c-59ddad71024c" 00:16:30.511 ], 00:16:30.512 "product_name": "Malloc disk", 00:16:30.512 "block_size": 512, 00:16:30.512 "num_blocks": 65536, 00:16:30.512 "uuid": "f21fee7a-48bc-11ef-a06c-59ddad71024c", 00:16:30.512 "assigned_rate_limits": { 00:16:30.512 "rw_ios_per_sec": 0, 00:16:30.512 "rw_mbytes_per_sec": 0, 00:16:30.512 "r_mbytes_per_sec": 0, 00:16:30.512 "w_mbytes_per_sec": 0 00:16:30.512 }, 00:16:30.512 "claimed": true, 00:16:30.512 "claim_type": "exclusive_write", 00:16:30.512 "zoned": false, 00:16:30.512 "supported_io_types": { 00:16:30.512 "read": true, 00:16:30.512 "write": true, 00:16:30.512 "unmap": true, 00:16:30.512 "flush": true, 00:16:30.512 "reset": true, 00:16:30.512 "nvme_admin": false, 00:16:30.512 "nvme_io": false, 00:16:30.512 "nvme_io_md": false, 00:16:30.512 "write_zeroes": true, 00:16:30.512 "zcopy": true, 00:16:30.512 "get_zone_info": false, 00:16:30.512 "zone_management": false, 00:16:30.512 "zone_append": false, 00:16:30.512 "compare": false, 00:16:30.512 "compare_and_write": false, 00:16:30.512 "abort": true, 00:16:30.512 "seek_hole": false, 00:16:30.512 "seek_data": false, 00:16:30.512 "copy": true, 00:16:30.512 "nvme_iov_md": false 00:16:30.512 }, 00:16:30.512 "memory_domains": [ 00:16:30.512 { 00:16:30.512 "dma_device_id": "system", 00:16:30.512 "dma_device_type": 1 00:16:30.512 }, 00:16:30.512 { 00:16:30.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.512 "dma_device_type": 2 00:16:30.512 } 00:16:30.512 ], 00:16:30.512 "driver_specific": {} 00:16:30.512 } 00:16:30.512 ] 00:16:30.512 06:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:30.512 06:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
00:16:30.512 06:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:30.512 06:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:30.512 06:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:30.512 06:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:30.512 06:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:30.512 06:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:30.512 06:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:30.512 06:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.512 06:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.512 06:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.512 06:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.512 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.512 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.770 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.770 "name": "Existed_Raid", 00:16:30.770 "uuid": "f21ff532-48bc-11ef-a06c-59ddad71024c", 00:16:30.770 "strip_size_kb": 64, 00:16:30.770 "state": "online", 00:16:30.770 "raid_level": "concat", 00:16:30.770 "superblock": false, 00:16:30.770 "num_base_bdevs": 4, 00:16:30.770 "num_base_bdevs_discovered": 4, 00:16:30.770 "num_base_bdevs_operational": 4, 00:16:30.770 "base_bdevs_list": [ 00:16:30.770 { 00:16:30.770 "name": "BaseBdev1", 00:16:30.770 "uuid": "eeb06095-48bc-11ef-a06c-59ddad71024c", 00:16:30.770 "is_configured": true, 00:16:30.770 "data_offset": 0, 00:16:30.770 "data_size": 65536 00:16:30.770 }, 00:16:30.770 { 00:16:30.770 "name": "BaseBdev2", 00:16:30.770 "uuid": "f05145da-48bc-11ef-a06c-59ddad71024c", 00:16:30.770 "is_configured": true, 00:16:30.770 "data_offset": 0, 00:16:30.770 "data_size": 65536 00:16:30.770 }, 00:16:30.770 { 00:16:30.770 "name": "BaseBdev3", 00:16:30.770 "uuid": "f1358d00-48bc-11ef-a06c-59ddad71024c", 00:16:30.770 "is_configured": true, 00:16:30.770 "data_offset": 0, 00:16:30.770 "data_size": 65536 00:16:30.770 }, 00:16:30.770 { 00:16:30.770 "name": "BaseBdev4", 00:16:30.770 "uuid": "f21fee7a-48bc-11ef-a06c-59ddad71024c", 00:16:30.770 "is_configured": true, 00:16:30.770 "data_offset": 0, 00:16:30.770 "data_size": 65536 00:16:30.770 } 00:16:30.770 ] 00:16:30.771 }' 00:16:30.771 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.771 06:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.337 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:31.337 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:31.337 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:16:31.337 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:31.337 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:31.337 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:31.337 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:31.337 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:31.595 [2024-07-23 06:29:43.876585] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.595 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:31.595 "name": "Existed_Raid", 00:16:31.595 "aliases": [ 00:16:31.595 "f21ff532-48bc-11ef-a06c-59ddad71024c" 00:16:31.595 ], 00:16:31.595 "product_name": "Raid Volume", 00:16:31.595 "block_size": 512, 00:16:31.595 "num_blocks": 262144, 00:16:31.595 "uuid": "f21ff532-48bc-11ef-a06c-59ddad71024c", 00:16:31.595 "assigned_rate_limits": { 00:16:31.595 "rw_ios_per_sec": 0, 00:16:31.595 "rw_mbytes_per_sec": 0, 00:16:31.595 "r_mbytes_per_sec": 0, 00:16:31.595 "w_mbytes_per_sec": 0 00:16:31.595 }, 00:16:31.595 "claimed": false, 00:16:31.595 "zoned": false, 00:16:31.595 "supported_io_types": { 00:16:31.595 "read": true, 00:16:31.595 "write": true, 00:16:31.595 "unmap": true, 00:16:31.595 "flush": true, 00:16:31.595 "reset": true, 00:16:31.595 "nvme_admin": false, 00:16:31.595 "nvme_io": false, 00:16:31.595 "nvme_io_md": false, 00:16:31.595 "write_zeroes": true, 00:16:31.595 "zcopy": false, 00:16:31.595 "get_zone_info": false, 00:16:31.595 "zone_management": false, 00:16:31.595 "zone_append": false, 00:16:31.595 "compare": false, 00:16:31.595 "compare_and_write": false, 00:16:31.595 "abort": false, 00:16:31.595 "seek_hole": false, 00:16:31.595 "seek_data": false, 00:16:31.595 "copy": false, 00:16:31.595 "nvme_iov_md": false 00:16:31.595 }, 00:16:31.595 "memory_domains": [ 00:16:31.595 { 00:16:31.595 "dma_device_id": "system", 00:16:31.595 "dma_device_type": 1 00:16:31.595 }, 00:16:31.595 { 00:16:31.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.595 "dma_device_type": 2 00:16:31.595 }, 00:16:31.595 { 00:16:31.595 "dma_device_id": "system", 00:16:31.595 "dma_device_type": 1 00:16:31.595 }, 00:16:31.595 { 00:16:31.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.595 "dma_device_type": 2 00:16:31.595 }, 00:16:31.595 { 00:16:31.595 "dma_device_id": "system", 00:16:31.595 "dma_device_type": 1 00:16:31.595 }, 00:16:31.595 { 00:16:31.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.595 "dma_device_type": 2 00:16:31.595 }, 00:16:31.595 { 00:16:31.595 "dma_device_id": "system", 00:16:31.595 "dma_device_type": 1 00:16:31.595 }, 00:16:31.595 { 00:16:31.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.595 "dma_device_type": 2 00:16:31.595 } 00:16:31.595 ], 00:16:31.595 "driver_specific": { 00:16:31.595 "raid": { 00:16:31.595 "uuid": "f21ff532-48bc-11ef-a06c-59ddad71024c", 00:16:31.595 "strip_size_kb": 64, 00:16:31.595 "state": "online", 00:16:31.595 "raid_level": "concat", 00:16:31.595 "superblock": false, 00:16:31.595 "num_base_bdevs": 4, 00:16:31.595 "num_base_bdevs_discovered": 4, 00:16:31.595 "num_base_bdevs_operational": 4, 00:16:31.595 "base_bdevs_list": [ 00:16:31.595 { 00:16:31.595 "name": "BaseBdev1", 00:16:31.595 "uuid": 
"eeb06095-48bc-11ef-a06c-59ddad71024c", 00:16:31.595 "is_configured": true, 00:16:31.595 "data_offset": 0, 00:16:31.595 "data_size": 65536 00:16:31.595 }, 00:16:31.595 { 00:16:31.595 "name": "BaseBdev2", 00:16:31.595 "uuid": "f05145da-48bc-11ef-a06c-59ddad71024c", 00:16:31.595 "is_configured": true, 00:16:31.595 "data_offset": 0, 00:16:31.595 "data_size": 65536 00:16:31.595 }, 00:16:31.595 { 00:16:31.595 "name": "BaseBdev3", 00:16:31.595 "uuid": "f1358d00-48bc-11ef-a06c-59ddad71024c", 00:16:31.595 "is_configured": true, 00:16:31.595 "data_offset": 0, 00:16:31.595 "data_size": 65536 00:16:31.595 }, 00:16:31.595 { 00:16:31.595 "name": "BaseBdev4", 00:16:31.595 "uuid": "f21fee7a-48bc-11ef-a06c-59ddad71024c", 00:16:31.595 "is_configured": true, 00:16:31.595 "data_offset": 0, 00:16:31.595 "data_size": 65536 00:16:31.595 } 00:16:31.596 ] 00:16:31.596 } 00:16:31.596 } 00:16:31.596 }' 00:16:31.596 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.596 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:31.596 BaseBdev2 00:16:31.596 BaseBdev3 00:16:31.596 BaseBdev4' 00:16:31.596 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.596 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:31.596 06:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.854 "name": "BaseBdev1", 00:16:31.854 "aliases": [ 00:16:31.854 "eeb06095-48bc-11ef-a06c-59ddad71024c" 00:16:31.854 ], 00:16:31.854 "product_name": "Malloc disk", 00:16:31.854 "block_size": 512, 00:16:31.854 "num_blocks": 65536, 00:16:31.854 "uuid": "eeb06095-48bc-11ef-a06c-59ddad71024c", 00:16:31.854 "assigned_rate_limits": { 00:16:31.854 "rw_ios_per_sec": 0, 00:16:31.854 "rw_mbytes_per_sec": 0, 00:16:31.854 "r_mbytes_per_sec": 0, 00:16:31.854 "w_mbytes_per_sec": 0 00:16:31.854 }, 00:16:31.854 "claimed": true, 00:16:31.854 "claim_type": "exclusive_write", 00:16:31.854 "zoned": false, 00:16:31.854 "supported_io_types": { 00:16:31.854 "read": true, 00:16:31.854 "write": true, 00:16:31.854 "unmap": true, 00:16:31.854 "flush": true, 00:16:31.854 "reset": true, 00:16:31.854 "nvme_admin": false, 00:16:31.854 "nvme_io": false, 00:16:31.854 "nvme_io_md": false, 00:16:31.854 "write_zeroes": true, 00:16:31.854 "zcopy": true, 00:16:31.854 "get_zone_info": false, 00:16:31.854 "zone_management": false, 00:16:31.854 "zone_append": false, 00:16:31.854 "compare": false, 00:16:31.854 "compare_and_write": false, 00:16:31.854 "abort": true, 00:16:31.854 "seek_hole": false, 00:16:31.854 "seek_data": false, 00:16:31.854 "copy": true, 00:16:31.854 "nvme_iov_md": false 00:16:31.854 }, 00:16:31.854 "memory_domains": [ 00:16:31.854 { 00:16:31.854 "dma_device_id": "system", 00:16:31.854 "dma_device_type": 1 00:16:31.854 }, 00:16:31.854 { 00:16:31.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.854 "dma_device_type": 2 00:16:31.854 } 00:16:31.854 ], 00:16:31.854 "driver_specific": {} 00:16:31.854 }' 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- 
# jq .block_size 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.854 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.112 "name": "BaseBdev2", 00:16:32.112 "aliases": [ 00:16:32.112 "f05145da-48bc-11ef-a06c-59ddad71024c" 00:16:32.112 ], 00:16:32.112 "product_name": "Malloc disk", 00:16:32.112 "block_size": 512, 00:16:32.112 "num_blocks": 65536, 00:16:32.112 "uuid": "f05145da-48bc-11ef-a06c-59ddad71024c", 00:16:32.112 "assigned_rate_limits": { 00:16:32.112 "rw_ios_per_sec": 0, 00:16:32.112 "rw_mbytes_per_sec": 0, 00:16:32.112 "r_mbytes_per_sec": 0, 00:16:32.112 "w_mbytes_per_sec": 0 00:16:32.112 }, 00:16:32.112 "claimed": true, 00:16:32.112 "claim_type": "exclusive_write", 00:16:32.112 "zoned": false, 00:16:32.112 "supported_io_types": { 00:16:32.112 "read": true, 00:16:32.112 "write": true, 00:16:32.112 "unmap": true, 00:16:32.112 "flush": true, 00:16:32.112 "reset": true, 00:16:32.112 "nvme_admin": false, 00:16:32.112 "nvme_io": false, 00:16:32.112 "nvme_io_md": false, 00:16:32.112 "write_zeroes": true, 00:16:32.112 "zcopy": true, 00:16:32.112 "get_zone_info": false, 00:16:32.112 "zone_management": false, 00:16:32.112 "zone_append": false, 00:16:32.112 "compare": false, 00:16:32.112 "compare_and_write": false, 00:16:32.112 "abort": true, 00:16:32.112 "seek_hole": false, 00:16:32.112 "seek_data": false, 00:16:32.112 "copy": true, 00:16:32.112 "nvme_iov_md": false 00:16:32.112 }, 00:16:32.112 "memory_domains": [ 00:16:32.112 { 00:16:32.112 "dma_device_id": "system", 00:16:32.112 "dma_device_type": 1 00:16:32.112 }, 00:16:32.112 { 00:16:32.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.112 "dma_device_type": 2 00:16:32.112 } 00:16:32.112 ], 00:16:32.112 "driver_specific": {} 00:16:32.112 }' 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:32.112 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.403 "name": "BaseBdev3", 00:16:32.403 "aliases": [ 00:16:32.403 "f1358d00-48bc-11ef-a06c-59ddad71024c" 00:16:32.403 ], 00:16:32.403 "product_name": "Malloc disk", 00:16:32.403 "block_size": 512, 00:16:32.403 "num_blocks": 65536, 00:16:32.403 "uuid": "f1358d00-48bc-11ef-a06c-59ddad71024c", 00:16:32.403 "assigned_rate_limits": { 00:16:32.403 "rw_ios_per_sec": 0, 00:16:32.403 "rw_mbytes_per_sec": 0, 00:16:32.403 "r_mbytes_per_sec": 0, 00:16:32.403 "w_mbytes_per_sec": 0 00:16:32.403 }, 00:16:32.403 "claimed": true, 00:16:32.403 "claim_type": "exclusive_write", 00:16:32.403 "zoned": false, 00:16:32.403 "supported_io_types": { 00:16:32.403 "read": true, 00:16:32.403 "write": true, 00:16:32.403 "unmap": true, 00:16:32.403 "flush": true, 00:16:32.403 "reset": true, 00:16:32.403 "nvme_admin": false, 00:16:32.403 "nvme_io": false, 00:16:32.403 "nvme_io_md": false, 00:16:32.403 "write_zeroes": true, 00:16:32.403 "zcopy": true, 00:16:32.403 "get_zone_info": false, 00:16:32.403 "zone_management": false, 00:16:32.403 "zone_append": false, 00:16:32.403 "compare": false, 00:16:32.403 "compare_and_write": false, 00:16:32.403 "abort": true, 00:16:32.403 "seek_hole": false, 00:16:32.403 "seek_data": false, 00:16:32.403 "copy": true, 00:16:32.403 "nvme_iov_md": false 00:16:32.403 }, 00:16:32.403 "memory_domains": [ 00:16:32.403 { 00:16:32.403 "dma_device_id": "system", 00:16:32.403 "dma_device_type": 1 00:16:32.403 }, 00:16:32.403 { 00:16:32.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.403 "dma_device_type": 2 00:16:32.403 } 00:16:32.403 ], 00:16:32.403 "driver_specific": {} 00:16:32.403 }' 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:32.403 06:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:32.662 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.662 "name": "BaseBdev4", 00:16:32.662 "aliases": [ 00:16:32.662 "f21fee7a-48bc-11ef-a06c-59ddad71024c" 00:16:32.662 ], 00:16:32.662 "product_name": "Malloc disk", 00:16:32.662 "block_size": 512, 00:16:32.662 "num_blocks": 65536, 00:16:32.662 "uuid": "f21fee7a-48bc-11ef-a06c-59ddad71024c", 00:16:32.662 "assigned_rate_limits": { 00:16:32.662 "rw_ios_per_sec": 0, 00:16:32.662 "rw_mbytes_per_sec": 0, 00:16:32.662 "r_mbytes_per_sec": 0, 00:16:32.662 "w_mbytes_per_sec": 0 00:16:32.662 }, 00:16:32.662 "claimed": true, 00:16:32.662 "claim_type": "exclusive_write", 00:16:32.662 "zoned": false, 00:16:32.662 "supported_io_types": { 00:16:32.662 "read": true, 00:16:32.662 "write": true, 00:16:32.662 "unmap": true, 00:16:32.662 "flush": true, 00:16:32.662 "reset": true, 00:16:32.662 "nvme_admin": false, 00:16:32.662 "nvme_io": false, 00:16:32.662 "nvme_io_md": false, 00:16:32.662 "write_zeroes": true, 00:16:32.662 "zcopy": true, 00:16:32.662 "get_zone_info": false, 00:16:32.662 "zone_management": false, 00:16:32.662 "zone_append": false, 00:16:32.662 "compare": false, 00:16:32.662 "compare_and_write": false, 00:16:32.662 "abort": true, 00:16:32.662 "seek_hole": false, 00:16:32.662 "seek_data": false, 00:16:32.662 "copy": true, 00:16:32.662 "nvme_iov_md": false 00:16:32.662 }, 00:16:32.662 "memory_domains": [ 00:16:32.662 { 00:16:32.662 "dma_device_id": "system", 00:16:32.662 "dma_device_type": 1 00:16:32.662 }, 00:16:32.662 { 00:16:32.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.662 "dma_device_type": 2 00:16:32.662 } 00:16:32.662 ], 00:16:32.662 "driver_specific": {} 00:16:32.662 }' 00:16:32.662 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.662 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.662 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.662 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.920 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.920 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.920 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.920 06:29:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.920 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.920 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.920 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.920 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.920 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:33.179 [2024-07-23 06:29:45.472606] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:33.179 [2024-07-23 06:29:45.472633] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.179 [2024-07-23 06:29:45.472648] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.179 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.437 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.437 "name": "Existed_Raid", 00:16:33.437 "uuid": "f21ff532-48bc-11ef-a06c-59ddad71024c", 00:16:33.437 "strip_size_kb": 64, 00:16:33.437 "state": "offline", 00:16:33.437 "raid_level": "concat", 00:16:33.437 "superblock": false, 00:16:33.437 "num_base_bdevs": 4, 00:16:33.437 "num_base_bdevs_discovered": 3, 00:16:33.437 "num_base_bdevs_operational": 3, 00:16:33.437 "base_bdevs_list": [ 00:16:33.437 { 00:16:33.437 
"name": null, 00:16:33.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.437 "is_configured": false, 00:16:33.437 "data_offset": 0, 00:16:33.437 "data_size": 65536 00:16:33.437 }, 00:16:33.437 { 00:16:33.437 "name": "BaseBdev2", 00:16:33.437 "uuid": "f05145da-48bc-11ef-a06c-59ddad71024c", 00:16:33.437 "is_configured": true, 00:16:33.437 "data_offset": 0, 00:16:33.437 "data_size": 65536 00:16:33.437 }, 00:16:33.437 { 00:16:33.437 "name": "BaseBdev3", 00:16:33.437 "uuid": "f1358d00-48bc-11ef-a06c-59ddad71024c", 00:16:33.437 "is_configured": true, 00:16:33.437 "data_offset": 0, 00:16:33.437 "data_size": 65536 00:16:33.437 }, 00:16:33.437 { 00:16:33.437 "name": "BaseBdev4", 00:16:33.437 "uuid": "f21fee7a-48bc-11ef-a06c-59ddad71024c", 00:16:33.437 "is_configured": true, 00:16:33.437 "data_offset": 0, 00:16:33.437 "data_size": 65536 00:16:33.437 } 00:16:33.437 ] 00:16:33.437 }' 00:16:33.437 06:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.437 06:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.696 06:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:33.696 06:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:33.696 06:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.696 06:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:33.954 06:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:33.954 06:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.954 06:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:34.212 [2024-07-23 06:29:46.574896] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:34.212 06:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:34.212 06:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:34.212 06:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.212 06:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:34.470 06:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:34.470 06:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:34.470 06:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:34.729 [2024-07-23 06:29:47.101375] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:34.729 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:34.729 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:34.729 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:16:34.729 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:34.988 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:34.988 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:34.988 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:35.246 [2024-07-23 06:29:47.595466] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:35.246 [2024-07-23 06:29:47.595514] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe270e034a00 name Existed_Raid, state offline 00:16:35.246 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:35.246 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:35.246 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.246 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:35.504 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:35.504 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:35.504 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:35.504 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:35.504 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:35.504 06:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:35.762 BaseBdev2 00:16:35.762 06:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:35.762 06:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:35.762 06:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:35.762 06:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:35.762 06:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:35.762 06:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:35.762 06:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.020 06:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:36.278 [ 00:16:36.278 { 00:16:36.278 "name": "BaseBdev2", 00:16:36.278 "aliases": [ 00:16:36.278 "f56eaa62-48bc-11ef-a06c-59ddad71024c" 00:16:36.278 ], 00:16:36.278 "product_name": "Malloc disk", 00:16:36.278 "block_size": 512, 00:16:36.278 "num_blocks": 65536, 00:16:36.278 "uuid": "f56eaa62-48bc-11ef-a06c-59ddad71024c", 00:16:36.278 "assigned_rate_limits": { 00:16:36.278 "rw_ios_per_sec": 0, 00:16:36.278 "rw_mbytes_per_sec": 0, 00:16:36.278 
"r_mbytes_per_sec": 0, 00:16:36.278 "w_mbytes_per_sec": 0 00:16:36.278 }, 00:16:36.278 "claimed": false, 00:16:36.278 "zoned": false, 00:16:36.278 "supported_io_types": { 00:16:36.278 "read": true, 00:16:36.278 "write": true, 00:16:36.278 "unmap": true, 00:16:36.278 "flush": true, 00:16:36.278 "reset": true, 00:16:36.278 "nvme_admin": false, 00:16:36.278 "nvme_io": false, 00:16:36.278 "nvme_io_md": false, 00:16:36.278 "write_zeroes": true, 00:16:36.278 "zcopy": true, 00:16:36.278 "get_zone_info": false, 00:16:36.278 "zone_management": false, 00:16:36.278 "zone_append": false, 00:16:36.278 "compare": false, 00:16:36.278 "compare_and_write": false, 00:16:36.278 "abort": true, 00:16:36.278 "seek_hole": false, 00:16:36.278 "seek_data": false, 00:16:36.278 "copy": true, 00:16:36.278 "nvme_iov_md": false 00:16:36.278 }, 00:16:36.278 "memory_domains": [ 00:16:36.278 { 00:16:36.278 "dma_device_id": "system", 00:16:36.278 "dma_device_type": 1 00:16:36.278 }, 00:16:36.278 { 00:16:36.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.278 "dma_device_type": 2 00:16:36.278 } 00:16:36.278 ], 00:16:36.278 "driver_specific": {} 00:16:36.278 } 00:16:36.278 ] 00:16:36.278 06:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:36.278 06:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:36.278 06:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:36.278 06:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:36.537 BaseBdev3 00:16:36.537 06:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:36.537 06:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:36.537 06:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:36.537 06:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:36.537 06:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:36.537 06:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:36.537 06:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.798 06:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:37.056 [ 00:16:37.056 { 00:16:37.056 "name": "BaseBdev3", 00:16:37.056 "aliases": [ 00:16:37.056 "f5ebcab6-48bc-11ef-a06c-59ddad71024c" 00:16:37.056 ], 00:16:37.056 "product_name": "Malloc disk", 00:16:37.056 "block_size": 512, 00:16:37.056 "num_blocks": 65536, 00:16:37.056 "uuid": "f5ebcab6-48bc-11ef-a06c-59ddad71024c", 00:16:37.056 "assigned_rate_limits": { 00:16:37.056 "rw_ios_per_sec": 0, 00:16:37.056 "rw_mbytes_per_sec": 0, 00:16:37.056 "r_mbytes_per_sec": 0, 00:16:37.056 "w_mbytes_per_sec": 0 00:16:37.056 }, 00:16:37.056 "claimed": false, 00:16:37.056 "zoned": false, 00:16:37.056 "supported_io_types": { 00:16:37.056 "read": true, 00:16:37.056 "write": true, 00:16:37.056 "unmap": true, 00:16:37.056 "flush": true, 00:16:37.056 "reset": true, 00:16:37.056 "nvme_admin": false, 
00:16:37.056 "nvme_io": false, 00:16:37.056 "nvme_io_md": false, 00:16:37.056 "write_zeroes": true, 00:16:37.056 "zcopy": true, 00:16:37.056 "get_zone_info": false, 00:16:37.056 "zone_management": false, 00:16:37.056 "zone_append": false, 00:16:37.056 "compare": false, 00:16:37.056 "compare_and_write": false, 00:16:37.056 "abort": true, 00:16:37.056 "seek_hole": false, 00:16:37.056 "seek_data": false, 00:16:37.056 "copy": true, 00:16:37.056 "nvme_iov_md": false 00:16:37.056 }, 00:16:37.056 "memory_domains": [ 00:16:37.056 { 00:16:37.056 "dma_device_id": "system", 00:16:37.056 "dma_device_type": 1 00:16:37.056 }, 00:16:37.056 { 00:16:37.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.056 "dma_device_type": 2 00:16:37.056 } 00:16:37.056 ], 00:16:37.056 "driver_specific": {} 00:16:37.056 } 00:16:37.056 ] 00:16:37.056 06:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:37.056 06:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:37.056 06:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:37.056 06:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:37.313 BaseBdev4 00:16:37.313 06:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:37.313 06:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:37.313 06:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:37.313 06:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:37.313 06:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:37.313 06:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:37.313 06:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.571 06:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:37.829 [ 00:16:37.829 { 00:16:37.829 "name": "BaseBdev4", 00:16:37.829 "aliases": [ 00:16:37.829 "f65a44d5-48bc-11ef-a06c-59ddad71024c" 00:16:37.829 ], 00:16:37.829 "product_name": "Malloc disk", 00:16:37.829 "block_size": 512, 00:16:37.829 "num_blocks": 65536, 00:16:37.829 "uuid": "f65a44d5-48bc-11ef-a06c-59ddad71024c", 00:16:37.829 "assigned_rate_limits": { 00:16:37.829 "rw_ios_per_sec": 0, 00:16:37.829 "rw_mbytes_per_sec": 0, 00:16:37.829 "r_mbytes_per_sec": 0, 00:16:37.829 "w_mbytes_per_sec": 0 00:16:37.829 }, 00:16:37.829 "claimed": false, 00:16:37.829 "zoned": false, 00:16:37.829 "supported_io_types": { 00:16:37.829 "read": true, 00:16:37.829 "write": true, 00:16:37.829 "unmap": true, 00:16:37.829 "flush": true, 00:16:37.829 "reset": true, 00:16:37.829 "nvme_admin": false, 00:16:37.829 "nvme_io": false, 00:16:37.829 "nvme_io_md": false, 00:16:37.829 "write_zeroes": true, 00:16:37.829 "zcopy": true, 00:16:37.829 "get_zone_info": false, 00:16:37.829 "zone_management": false, 00:16:37.829 "zone_append": false, 00:16:37.829 "compare": false, 00:16:37.829 "compare_and_write": false, 00:16:37.829 "abort": true, 
00:16:37.829 "seek_hole": false, 00:16:37.829 "seek_data": false, 00:16:37.829 "copy": true, 00:16:37.829 "nvme_iov_md": false 00:16:37.829 }, 00:16:37.829 "memory_domains": [ 00:16:37.829 { 00:16:37.829 "dma_device_id": "system", 00:16:37.829 "dma_device_type": 1 00:16:37.829 }, 00:16:37.829 { 00:16:37.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.829 "dma_device_type": 2 00:16:37.829 } 00:16:37.829 ], 00:16:37.829 "driver_specific": {} 00:16:37.829 } 00:16:37.829 ] 00:16:37.829 06:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:37.829 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:37.829 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:37.829 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:38.106 [2024-07-23 06:29:50.445754] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:38.106 [2024-07-23 06:29:50.445824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:38.106 [2024-07-23 06:29:50.445834] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.106 [2024-07-23 06:29:50.446402] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.106 [2024-07-23 06:29:50.446421] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:38.106 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:38.106 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:38.106 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:38.106 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:38.106 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:38.106 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:38.106 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.106 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.106 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.106 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.106 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.106 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.374 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.374 "name": "Existed_Raid", 00:16:38.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.374 "strip_size_kb": 64, 00:16:38.374 "state": "configuring", 00:16:38.374 "raid_level": "concat", 00:16:38.374 "superblock": false, 00:16:38.374 "num_base_bdevs": 4, 00:16:38.374 
"num_base_bdevs_discovered": 3, 00:16:38.374 "num_base_bdevs_operational": 4, 00:16:38.374 "base_bdevs_list": [ 00:16:38.374 { 00:16:38.374 "name": "BaseBdev1", 00:16:38.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.374 "is_configured": false, 00:16:38.374 "data_offset": 0, 00:16:38.374 "data_size": 0 00:16:38.374 }, 00:16:38.374 { 00:16:38.374 "name": "BaseBdev2", 00:16:38.374 "uuid": "f56eaa62-48bc-11ef-a06c-59ddad71024c", 00:16:38.374 "is_configured": true, 00:16:38.374 "data_offset": 0, 00:16:38.374 "data_size": 65536 00:16:38.374 }, 00:16:38.374 { 00:16:38.374 "name": "BaseBdev3", 00:16:38.374 "uuid": "f5ebcab6-48bc-11ef-a06c-59ddad71024c", 00:16:38.374 "is_configured": true, 00:16:38.374 "data_offset": 0, 00:16:38.374 "data_size": 65536 00:16:38.374 }, 00:16:38.374 { 00:16:38.374 "name": "BaseBdev4", 00:16:38.374 "uuid": "f65a44d5-48bc-11ef-a06c-59ddad71024c", 00:16:38.374 "is_configured": true, 00:16:38.374 "data_offset": 0, 00:16:38.374 "data_size": 65536 00:16:38.375 } 00:16:38.375 ] 00:16:38.375 }' 00:16:38.375 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.375 06:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.632 06:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:38.891 [2024-07-23 06:29:51.217779] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:38.891 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:38.891 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:38.891 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:38.891 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:38.891 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:38.891 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:38.891 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.891 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.891 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.891 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.891 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.891 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.150 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.150 "name": "Existed_Raid", 00:16:39.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.150 "strip_size_kb": 64, 00:16:39.150 "state": "configuring", 00:16:39.150 "raid_level": "concat", 00:16:39.150 "superblock": false, 00:16:39.150 "num_base_bdevs": 4, 00:16:39.150 "num_base_bdevs_discovered": 2, 00:16:39.150 "num_base_bdevs_operational": 4, 00:16:39.150 "base_bdevs_list": [ 00:16:39.150 { 00:16:39.150 
"name": "BaseBdev1", 00:16:39.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.150 "is_configured": false, 00:16:39.150 "data_offset": 0, 00:16:39.150 "data_size": 0 00:16:39.150 }, 00:16:39.150 { 00:16:39.150 "name": null, 00:16:39.150 "uuid": "f56eaa62-48bc-11ef-a06c-59ddad71024c", 00:16:39.150 "is_configured": false, 00:16:39.150 "data_offset": 0, 00:16:39.150 "data_size": 65536 00:16:39.150 }, 00:16:39.150 { 00:16:39.150 "name": "BaseBdev3", 00:16:39.150 "uuid": "f5ebcab6-48bc-11ef-a06c-59ddad71024c", 00:16:39.150 "is_configured": true, 00:16:39.150 "data_offset": 0, 00:16:39.150 "data_size": 65536 00:16:39.150 }, 00:16:39.150 { 00:16:39.150 "name": "BaseBdev4", 00:16:39.150 "uuid": "f65a44d5-48bc-11ef-a06c-59ddad71024c", 00:16:39.150 "is_configured": true, 00:16:39.150 "data_offset": 0, 00:16:39.150 "data_size": 65536 00:16:39.150 } 00:16:39.150 ] 00:16:39.150 }' 00:16:39.150 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.150 06:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.408 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.408 06:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:39.667 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:39.667 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:39.926 [2024-07-23 06:29:52.277961] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.926 BaseBdev1 00:16:39.926 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:39.926 06:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:39.926 06:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:39.926 06:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:39.926 06:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:39.926 06:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:39.926 06:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:40.199 06:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:40.457 [ 00:16:40.457 { 00:16:40.457 "name": "BaseBdev1", 00:16:40.457 "aliases": [ 00:16:40.457 "f7f3d562-48bc-11ef-a06c-59ddad71024c" 00:16:40.457 ], 00:16:40.457 "product_name": "Malloc disk", 00:16:40.457 "block_size": 512, 00:16:40.457 "num_blocks": 65536, 00:16:40.457 "uuid": "f7f3d562-48bc-11ef-a06c-59ddad71024c", 00:16:40.457 "assigned_rate_limits": { 00:16:40.457 "rw_ios_per_sec": 0, 00:16:40.457 "rw_mbytes_per_sec": 0, 00:16:40.457 "r_mbytes_per_sec": 0, 00:16:40.457 "w_mbytes_per_sec": 0 00:16:40.457 }, 00:16:40.457 "claimed": true, 00:16:40.457 "claim_type": "exclusive_write", 00:16:40.457 "zoned": false, 
00:16:40.457 "supported_io_types": { 00:16:40.457 "read": true, 00:16:40.457 "write": true, 00:16:40.457 "unmap": true, 00:16:40.457 "flush": true, 00:16:40.457 "reset": true, 00:16:40.457 "nvme_admin": false, 00:16:40.457 "nvme_io": false, 00:16:40.457 "nvme_io_md": false, 00:16:40.457 "write_zeroes": true, 00:16:40.457 "zcopy": true, 00:16:40.457 "get_zone_info": false, 00:16:40.457 "zone_management": false, 00:16:40.457 "zone_append": false, 00:16:40.457 "compare": false, 00:16:40.457 "compare_and_write": false, 00:16:40.457 "abort": true, 00:16:40.457 "seek_hole": false, 00:16:40.457 "seek_data": false, 00:16:40.457 "copy": true, 00:16:40.457 "nvme_iov_md": false 00:16:40.457 }, 00:16:40.457 "memory_domains": [ 00:16:40.457 { 00:16:40.457 "dma_device_id": "system", 00:16:40.457 "dma_device_type": 1 00:16:40.457 }, 00:16:40.457 { 00:16:40.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.457 "dma_device_type": 2 00:16:40.457 } 00:16:40.457 ], 00:16:40.457 "driver_specific": {} 00:16:40.457 } 00:16:40.457 ] 00:16:40.457 06:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:40.457 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:40.457 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:40.457 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:40.457 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:40.457 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:40.457 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:40.457 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.457 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.457 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.457 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.457 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.457 06:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.716 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.716 "name": "Existed_Raid", 00:16:40.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.716 "strip_size_kb": 64, 00:16:40.716 "state": "configuring", 00:16:40.716 "raid_level": "concat", 00:16:40.716 "superblock": false, 00:16:40.716 "num_base_bdevs": 4, 00:16:40.716 "num_base_bdevs_discovered": 3, 00:16:40.716 "num_base_bdevs_operational": 4, 00:16:40.716 "base_bdevs_list": [ 00:16:40.716 { 00:16:40.716 "name": "BaseBdev1", 00:16:40.716 "uuid": "f7f3d562-48bc-11ef-a06c-59ddad71024c", 00:16:40.716 "is_configured": true, 00:16:40.716 "data_offset": 0, 00:16:40.716 "data_size": 65536 00:16:40.716 }, 00:16:40.716 { 00:16:40.716 "name": null, 00:16:40.716 "uuid": "f56eaa62-48bc-11ef-a06c-59ddad71024c", 00:16:40.716 "is_configured": false, 00:16:40.716 "data_offset": 0, 00:16:40.716 "data_size": 65536 00:16:40.716 
}, 00:16:40.716 { 00:16:40.716 "name": "BaseBdev3", 00:16:40.716 "uuid": "f5ebcab6-48bc-11ef-a06c-59ddad71024c", 00:16:40.716 "is_configured": true, 00:16:40.716 "data_offset": 0, 00:16:40.716 "data_size": 65536 00:16:40.716 }, 00:16:40.716 { 00:16:40.716 "name": "BaseBdev4", 00:16:40.716 "uuid": "f65a44d5-48bc-11ef-a06c-59ddad71024c", 00:16:40.716 "is_configured": true, 00:16:40.716 "data_offset": 0, 00:16:40.716 "data_size": 65536 00:16:40.716 } 00:16:40.716 ] 00:16:40.716 }' 00:16:40.716 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.716 06:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.973 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.973 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:41.231 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:41.231 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:41.489 [2024-07-23 06:29:53.954051] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:41.489 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:41.489 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:41.489 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:41.489 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:41.489 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:41.489 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:41.489 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.489 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.489 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.489 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.489 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.489 06:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.748 06:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.748 "name": "Existed_Raid", 00:16:41.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.748 "strip_size_kb": 64, 00:16:41.748 "state": "configuring", 00:16:41.748 "raid_level": "concat", 00:16:41.748 "superblock": false, 00:16:41.748 "num_base_bdevs": 4, 00:16:41.748 "num_base_bdevs_discovered": 2, 00:16:41.748 "num_base_bdevs_operational": 4, 00:16:41.748 "base_bdevs_list": [ 00:16:41.748 { 00:16:41.748 "name": "BaseBdev1", 00:16:41.748 "uuid": "f7f3d562-48bc-11ef-a06c-59ddad71024c", 00:16:41.748 "is_configured": true, 00:16:41.748 
"data_offset": 0, 00:16:41.748 "data_size": 65536 00:16:41.748 }, 00:16:41.748 { 00:16:41.748 "name": null, 00:16:41.748 "uuid": "f56eaa62-48bc-11ef-a06c-59ddad71024c", 00:16:41.748 "is_configured": false, 00:16:41.748 "data_offset": 0, 00:16:41.748 "data_size": 65536 00:16:41.748 }, 00:16:41.748 { 00:16:41.748 "name": null, 00:16:41.748 "uuid": "f5ebcab6-48bc-11ef-a06c-59ddad71024c", 00:16:41.748 "is_configured": false, 00:16:41.748 "data_offset": 0, 00:16:41.748 "data_size": 65536 00:16:41.748 }, 00:16:41.748 { 00:16:41.748 "name": "BaseBdev4", 00:16:41.748 "uuid": "f65a44d5-48bc-11ef-a06c-59ddad71024c", 00:16:41.748 "is_configured": true, 00:16:41.748 "data_offset": 0, 00:16:41.748 "data_size": 65536 00:16:41.748 } 00:16:41.748 ] 00:16:41.748 }' 00:16:41.748 06:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.748 06:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.314 06:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.314 06:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:42.572 06:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:42.572 06:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:42.831 [2024-07-23 06:29:55.162098] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:42.831 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:42.831 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:42.831 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:42.831 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:42.831 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:42.831 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:42.831 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:42.831 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:42.831 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:42.831 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:42.831 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.831 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.089 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.089 "name": "Existed_Raid", 00:16:43.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.089 "strip_size_kb": 64, 00:16:43.089 "state": "configuring", 00:16:43.089 "raid_level": "concat", 00:16:43.089 "superblock": false, 00:16:43.089 
"num_base_bdevs": 4, 00:16:43.089 "num_base_bdevs_discovered": 3, 00:16:43.089 "num_base_bdevs_operational": 4, 00:16:43.089 "base_bdevs_list": [ 00:16:43.089 { 00:16:43.089 "name": "BaseBdev1", 00:16:43.090 "uuid": "f7f3d562-48bc-11ef-a06c-59ddad71024c", 00:16:43.090 "is_configured": true, 00:16:43.090 "data_offset": 0, 00:16:43.090 "data_size": 65536 00:16:43.090 }, 00:16:43.090 { 00:16:43.090 "name": null, 00:16:43.090 "uuid": "f56eaa62-48bc-11ef-a06c-59ddad71024c", 00:16:43.090 "is_configured": false, 00:16:43.090 "data_offset": 0, 00:16:43.090 "data_size": 65536 00:16:43.090 }, 00:16:43.090 { 00:16:43.090 "name": "BaseBdev3", 00:16:43.090 "uuid": "f5ebcab6-48bc-11ef-a06c-59ddad71024c", 00:16:43.090 "is_configured": true, 00:16:43.090 "data_offset": 0, 00:16:43.090 "data_size": 65536 00:16:43.090 }, 00:16:43.090 { 00:16:43.090 "name": "BaseBdev4", 00:16:43.090 "uuid": "f65a44d5-48bc-11ef-a06c-59ddad71024c", 00:16:43.090 "is_configured": true, 00:16:43.090 "data_offset": 0, 00:16:43.090 "data_size": 65536 00:16:43.090 } 00:16:43.090 ] 00:16:43.090 }' 00:16:43.090 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.090 06:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.348 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.348 06:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:43.914 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:43.914 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:43.914 [2024-07-23 06:29:56.426245] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:44.172 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:44.172 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:44.172 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:44.172 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:44.172 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:44.172 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:44.172 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.172 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.172 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.172 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.172 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.172 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.431 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:16:44.431 "name": "Existed_Raid", 00:16:44.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.431 "strip_size_kb": 64, 00:16:44.431 "state": "configuring", 00:16:44.431 "raid_level": "concat", 00:16:44.431 "superblock": false, 00:16:44.431 "num_base_bdevs": 4, 00:16:44.431 "num_base_bdevs_discovered": 2, 00:16:44.431 "num_base_bdevs_operational": 4, 00:16:44.431 "base_bdevs_list": [ 00:16:44.431 { 00:16:44.431 "name": null, 00:16:44.431 "uuid": "f7f3d562-48bc-11ef-a06c-59ddad71024c", 00:16:44.431 "is_configured": false, 00:16:44.431 "data_offset": 0, 00:16:44.431 "data_size": 65536 00:16:44.431 }, 00:16:44.431 { 00:16:44.431 "name": null, 00:16:44.431 "uuid": "f56eaa62-48bc-11ef-a06c-59ddad71024c", 00:16:44.431 "is_configured": false, 00:16:44.431 "data_offset": 0, 00:16:44.431 "data_size": 65536 00:16:44.431 }, 00:16:44.431 { 00:16:44.431 "name": "BaseBdev3", 00:16:44.431 "uuid": "f5ebcab6-48bc-11ef-a06c-59ddad71024c", 00:16:44.431 "is_configured": true, 00:16:44.431 "data_offset": 0, 00:16:44.431 "data_size": 65536 00:16:44.431 }, 00:16:44.431 { 00:16:44.431 "name": "BaseBdev4", 00:16:44.431 "uuid": "f65a44d5-48bc-11ef-a06c-59ddad71024c", 00:16:44.431 "is_configured": true, 00:16:44.431 "data_offset": 0, 00:16:44.431 "data_size": 65536 00:16:44.431 } 00:16:44.431 ] 00:16:44.431 }' 00:16:44.431 06:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.431 06:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.689 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.689 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:44.948 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:44.948 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:45.207 [2024-07-23 06:29:57.724612] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.466 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:45.466 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:45.466 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:45.466 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:45.466 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:45.466 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:45.466 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.466 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.466 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.466 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.466 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.467 06:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.728 06:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.728 "name": "Existed_Raid", 00:16:45.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.728 "strip_size_kb": 64, 00:16:45.728 "state": "configuring", 00:16:45.728 "raid_level": "concat", 00:16:45.728 "superblock": false, 00:16:45.728 "num_base_bdevs": 4, 00:16:45.728 "num_base_bdevs_discovered": 3, 00:16:45.728 "num_base_bdevs_operational": 4, 00:16:45.728 "base_bdevs_list": [ 00:16:45.728 { 00:16:45.728 "name": null, 00:16:45.728 "uuid": "f7f3d562-48bc-11ef-a06c-59ddad71024c", 00:16:45.728 "is_configured": false, 00:16:45.728 "data_offset": 0, 00:16:45.728 "data_size": 65536 00:16:45.728 }, 00:16:45.728 { 00:16:45.728 "name": "BaseBdev2", 00:16:45.728 "uuid": "f56eaa62-48bc-11ef-a06c-59ddad71024c", 00:16:45.728 "is_configured": true, 00:16:45.728 "data_offset": 0, 00:16:45.728 "data_size": 65536 00:16:45.728 }, 00:16:45.728 { 00:16:45.728 "name": "BaseBdev3", 00:16:45.728 "uuid": "f5ebcab6-48bc-11ef-a06c-59ddad71024c", 00:16:45.728 "is_configured": true, 00:16:45.728 "data_offset": 0, 00:16:45.728 "data_size": 65536 00:16:45.728 }, 00:16:45.728 { 00:16:45.728 "name": "BaseBdev4", 00:16:45.728 "uuid": "f65a44d5-48bc-11ef-a06c-59ddad71024c", 00:16:45.728 "is_configured": true, 00:16:45.728 "data_offset": 0, 00:16:45.728 "data_size": 65536 00:16:45.728 } 00:16:45.728 ] 00:16:45.728 }' 00:16:45.728 06:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.728 06:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.992 06:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.992 06:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:46.259 06:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:46.259 06:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.259 06:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:46.529 06:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f7f3d562-48bc-11ef-a06c-59ddad71024c 00:16:46.801 [2024-07-23 06:29:59.184775] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:46.801 [2024-07-23 06:29:59.184802] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xe270e034f00 00:16:46.801 [2024-07-23 06:29:59.184824] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:46.801 [2024-07-23 06:29:59.184866] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe270e097e20 00:16:46.801 [2024-07-23 06:29:59.184940] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe270e034f00 00:16:46.801 [2024-07-23 06:29:59.184945] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name Existed_Raid, raid_bdev 0xe270e034f00 00:16:46.801 [2024-07-23 06:29:59.184979] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.801 NewBaseBdev 00:16:46.801 06:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:46.801 06:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:46.801 06:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:46.801 06:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:46.801 06:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:46.801 06:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:46.801 06:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:47.074 06:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:47.349 [ 00:16:47.349 { 00:16:47.349 "name": "NewBaseBdev", 00:16:47.349 "aliases": [ 00:16:47.349 "f7f3d562-48bc-11ef-a06c-59ddad71024c" 00:16:47.349 ], 00:16:47.349 "product_name": "Malloc disk", 00:16:47.349 "block_size": 512, 00:16:47.349 "num_blocks": 65536, 00:16:47.349 "uuid": "f7f3d562-48bc-11ef-a06c-59ddad71024c", 00:16:47.349 "assigned_rate_limits": { 00:16:47.349 "rw_ios_per_sec": 0, 00:16:47.349 "rw_mbytes_per_sec": 0, 00:16:47.349 "r_mbytes_per_sec": 0, 00:16:47.350 "w_mbytes_per_sec": 0 00:16:47.350 }, 00:16:47.350 "claimed": true, 00:16:47.350 "claim_type": "exclusive_write", 00:16:47.350 "zoned": false, 00:16:47.350 "supported_io_types": { 00:16:47.350 "read": true, 00:16:47.350 "write": true, 00:16:47.350 "unmap": true, 00:16:47.350 "flush": true, 00:16:47.350 "reset": true, 00:16:47.350 "nvme_admin": false, 00:16:47.350 "nvme_io": false, 00:16:47.350 "nvme_io_md": false, 00:16:47.350 "write_zeroes": true, 00:16:47.350 "zcopy": true, 00:16:47.350 "get_zone_info": false, 00:16:47.350 "zone_management": false, 00:16:47.350 "zone_append": false, 00:16:47.350 "compare": false, 00:16:47.350 "compare_and_write": false, 00:16:47.350 "abort": true, 00:16:47.350 "seek_hole": false, 00:16:47.350 "seek_data": false, 00:16:47.350 "copy": true, 00:16:47.350 "nvme_iov_md": false 00:16:47.350 }, 00:16:47.350 "memory_domains": [ 00:16:47.350 { 00:16:47.350 "dma_device_id": "system", 00:16:47.350 "dma_device_type": 1 00:16:47.350 }, 00:16:47.350 { 00:16:47.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.350 "dma_device_type": 2 00:16:47.350 } 00:16:47.350 ], 00:16:47.350 "driver_specific": {} 00:16:47.350 } 00:16:47.350 ] 00:16:47.350 06:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:47.350 06:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:47.350 06:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:47.350 06:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:47.350 06:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:47.350 06:29:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:47.350 06:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:47.350 06:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.350 06:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.350 06:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.350 06:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.350 06:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.350 06:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.666 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.666 "name": "Existed_Raid", 00:16:47.666 "uuid": "fc11c18e-48bc-11ef-a06c-59ddad71024c", 00:16:47.666 "strip_size_kb": 64, 00:16:47.666 "state": "online", 00:16:47.666 "raid_level": "concat", 00:16:47.666 "superblock": false, 00:16:47.666 "num_base_bdevs": 4, 00:16:47.666 "num_base_bdevs_discovered": 4, 00:16:47.666 "num_base_bdevs_operational": 4, 00:16:47.666 "base_bdevs_list": [ 00:16:47.666 { 00:16:47.666 "name": "NewBaseBdev", 00:16:47.666 "uuid": "f7f3d562-48bc-11ef-a06c-59ddad71024c", 00:16:47.666 "is_configured": true, 00:16:47.666 "data_offset": 0, 00:16:47.666 "data_size": 65536 00:16:47.666 }, 00:16:47.666 { 00:16:47.666 "name": "BaseBdev2", 00:16:47.666 "uuid": "f56eaa62-48bc-11ef-a06c-59ddad71024c", 00:16:47.666 "is_configured": true, 00:16:47.666 "data_offset": 0, 00:16:47.666 "data_size": 65536 00:16:47.666 }, 00:16:47.666 { 00:16:47.666 "name": "BaseBdev3", 00:16:47.666 "uuid": "f5ebcab6-48bc-11ef-a06c-59ddad71024c", 00:16:47.666 "is_configured": true, 00:16:47.666 "data_offset": 0, 00:16:47.666 "data_size": 65536 00:16:47.666 }, 00:16:47.666 { 00:16:47.666 "name": "BaseBdev4", 00:16:47.666 "uuid": "f65a44d5-48bc-11ef-a06c-59ddad71024c", 00:16:47.666 "is_configured": true, 00:16:47.666 "data_offset": 0, 00:16:47.666 "data_size": 65536 00:16:47.666 } 00:16:47.666 ] 00:16:47.666 }' 00:16:47.666 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.666 06:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.925 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:47.925 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:47.925 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:47.925 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:47.925 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:47.925 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:47.925 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:47.925 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
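The trace above drives SPDK entirely through scripts/rpc.py against the test's dedicated socket /var/tmp/spdk-raid.sock. As a hedged aside, a minimal standalone sketch of the concat array being verified here, using only RPCs that actually appear in this trace (32 MiB malloc bdevs with 512-byte blocks, 64 KiB strip size) and assuming an SPDK target is already listening on that socket, might look like:
# sketch only, not part of the recorded run
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $RPC bdev_malloc_create 32 512 -b "$b"    # 32 MiB malloc bdev, 512-byte blocks (65536 blocks)
    $RPC bdev_wait_for_examine                # let the raid module examine/claim the new bdev
done
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'    # expect "online"
The verify_raid_bdev_state calls in the trace reduce to queries like that last line, comparing fields such as .state, .num_base_bdevs_discovered and .base_bdevs_list[].is_configured against the expected values.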
00:16:48.183 [2024-07-23 06:30:00.688753] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.183 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:48.183 "name": "Existed_Raid", 00:16:48.183 "aliases": [ 00:16:48.183 "fc11c18e-48bc-11ef-a06c-59ddad71024c" 00:16:48.183 ], 00:16:48.183 "product_name": "Raid Volume", 00:16:48.183 "block_size": 512, 00:16:48.183 "num_blocks": 262144, 00:16:48.183 "uuid": "fc11c18e-48bc-11ef-a06c-59ddad71024c", 00:16:48.183 "assigned_rate_limits": { 00:16:48.183 "rw_ios_per_sec": 0, 00:16:48.183 "rw_mbytes_per_sec": 0, 00:16:48.183 "r_mbytes_per_sec": 0, 00:16:48.183 "w_mbytes_per_sec": 0 00:16:48.183 }, 00:16:48.183 "claimed": false, 00:16:48.183 "zoned": false, 00:16:48.183 "supported_io_types": { 00:16:48.183 "read": true, 00:16:48.183 "write": true, 00:16:48.183 "unmap": true, 00:16:48.183 "flush": true, 00:16:48.183 "reset": true, 00:16:48.183 "nvme_admin": false, 00:16:48.183 "nvme_io": false, 00:16:48.183 "nvme_io_md": false, 00:16:48.183 "write_zeroes": true, 00:16:48.183 "zcopy": false, 00:16:48.183 "get_zone_info": false, 00:16:48.183 "zone_management": false, 00:16:48.183 "zone_append": false, 00:16:48.183 "compare": false, 00:16:48.183 "compare_and_write": false, 00:16:48.183 "abort": false, 00:16:48.183 "seek_hole": false, 00:16:48.183 "seek_data": false, 00:16:48.183 "copy": false, 00:16:48.183 "nvme_iov_md": false 00:16:48.183 }, 00:16:48.183 "memory_domains": [ 00:16:48.183 { 00:16:48.183 "dma_device_id": "system", 00:16:48.183 "dma_device_type": 1 00:16:48.183 }, 00:16:48.183 { 00:16:48.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.183 "dma_device_type": 2 00:16:48.183 }, 00:16:48.183 { 00:16:48.183 "dma_device_id": "system", 00:16:48.183 "dma_device_type": 1 00:16:48.183 }, 00:16:48.183 { 00:16:48.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.183 "dma_device_type": 2 00:16:48.183 }, 00:16:48.183 { 00:16:48.183 "dma_device_id": "system", 00:16:48.183 "dma_device_type": 1 00:16:48.183 }, 00:16:48.183 { 00:16:48.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.183 "dma_device_type": 2 00:16:48.183 }, 00:16:48.183 { 00:16:48.183 "dma_device_id": "system", 00:16:48.183 "dma_device_type": 1 00:16:48.183 }, 00:16:48.183 { 00:16:48.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.183 "dma_device_type": 2 00:16:48.183 } 00:16:48.183 ], 00:16:48.183 "driver_specific": { 00:16:48.183 "raid": { 00:16:48.183 "uuid": "fc11c18e-48bc-11ef-a06c-59ddad71024c", 00:16:48.183 "strip_size_kb": 64, 00:16:48.183 "state": "online", 00:16:48.183 "raid_level": "concat", 00:16:48.183 "superblock": false, 00:16:48.183 "num_base_bdevs": 4, 00:16:48.183 "num_base_bdevs_discovered": 4, 00:16:48.183 "num_base_bdevs_operational": 4, 00:16:48.183 "base_bdevs_list": [ 00:16:48.183 { 00:16:48.183 "name": "NewBaseBdev", 00:16:48.184 "uuid": "f7f3d562-48bc-11ef-a06c-59ddad71024c", 00:16:48.184 "is_configured": true, 00:16:48.184 "data_offset": 0, 00:16:48.184 "data_size": 65536 00:16:48.184 }, 00:16:48.184 { 00:16:48.184 "name": "BaseBdev2", 00:16:48.184 "uuid": "f56eaa62-48bc-11ef-a06c-59ddad71024c", 00:16:48.184 "is_configured": true, 00:16:48.184 "data_offset": 0, 00:16:48.184 "data_size": 65536 00:16:48.184 }, 00:16:48.184 { 00:16:48.184 "name": "BaseBdev3", 00:16:48.184 "uuid": "f5ebcab6-48bc-11ef-a06c-59ddad71024c", 00:16:48.184 "is_configured": true, 00:16:48.184 "data_offset": 0, 00:16:48.184 "data_size": 65536 00:16:48.184 }, 00:16:48.184 { 00:16:48.184 
"name": "BaseBdev4", 00:16:48.184 "uuid": "f65a44d5-48bc-11ef-a06c-59ddad71024c", 00:16:48.184 "is_configured": true, 00:16:48.184 "data_offset": 0, 00:16:48.184 "data_size": 65536 00:16:48.184 } 00:16:48.184 ] 00:16:48.184 } 00:16:48.184 } 00:16:48.184 }' 00:16:48.442 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:48.442 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:48.442 BaseBdev2 00:16:48.442 BaseBdev3 00:16:48.442 BaseBdev4' 00:16:48.442 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:48.442 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:48.442 06:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:48.700 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:48.700 "name": "NewBaseBdev", 00:16:48.700 "aliases": [ 00:16:48.700 "f7f3d562-48bc-11ef-a06c-59ddad71024c" 00:16:48.700 ], 00:16:48.700 "product_name": "Malloc disk", 00:16:48.700 "block_size": 512, 00:16:48.700 "num_blocks": 65536, 00:16:48.700 "uuid": "f7f3d562-48bc-11ef-a06c-59ddad71024c", 00:16:48.700 "assigned_rate_limits": { 00:16:48.700 "rw_ios_per_sec": 0, 00:16:48.700 "rw_mbytes_per_sec": 0, 00:16:48.700 "r_mbytes_per_sec": 0, 00:16:48.700 "w_mbytes_per_sec": 0 00:16:48.700 }, 00:16:48.700 "claimed": true, 00:16:48.700 "claim_type": "exclusive_write", 00:16:48.700 "zoned": false, 00:16:48.700 "supported_io_types": { 00:16:48.700 "read": true, 00:16:48.700 "write": true, 00:16:48.700 "unmap": true, 00:16:48.701 "flush": true, 00:16:48.701 "reset": true, 00:16:48.701 "nvme_admin": false, 00:16:48.701 "nvme_io": false, 00:16:48.701 "nvme_io_md": false, 00:16:48.701 "write_zeroes": true, 00:16:48.701 "zcopy": true, 00:16:48.701 "get_zone_info": false, 00:16:48.701 "zone_management": false, 00:16:48.701 "zone_append": false, 00:16:48.701 "compare": false, 00:16:48.701 "compare_and_write": false, 00:16:48.701 "abort": true, 00:16:48.701 "seek_hole": false, 00:16:48.701 "seek_data": false, 00:16:48.701 "copy": true, 00:16:48.701 "nvme_iov_md": false 00:16:48.701 }, 00:16:48.701 "memory_domains": [ 00:16:48.701 { 00:16:48.701 "dma_device_id": "system", 00:16:48.701 "dma_device_type": 1 00:16:48.701 }, 00:16:48.701 { 00:16:48.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.701 "dma_device_type": 2 00:16:48.701 } 00:16:48.701 ], 00:16:48.701 "driver_specific": {} 00:16:48.701 }' 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:48.701 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:48.959 "name": "BaseBdev2", 00:16:48.959 "aliases": [ 00:16:48.959 "f56eaa62-48bc-11ef-a06c-59ddad71024c" 00:16:48.959 ], 00:16:48.959 "product_name": "Malloc disk", 00:16:48.959 "block_size": 512, 00:16:48.959 "num_blocks": 65536, 00:16:48.959 "uuid": "f56eaa62-48bc-11ef-a06c-59ddad71024c", 00:16:48.959 "assigned_rate_limits": { 00:16:48.959 "rw_ios_per_sec": 0, 00:16:48.959 "rw_mbytes_per_sec": 0, 00:16:48.959 "r_mbytes_per_sec": 0, 00:16:48.959 "w_mbytes_per_sec": 0 00:16:48.959 }, 00:16:48.959 "claimed": true, 00:16:48.959 "claim_type": "exclusive_write", 00:16:48.959 "zoned": false, 00:16:48.959 "supported_io_types": { 00:16:48.959 "read": true, 00:16:48.959 "write": true, 00:16:48.959 "unmap": true, 00:16:48.959 "flush": true, 00:16:48.959 "reset": true, 00:16:48.959 "nvme_admin": false, 00:16:48.959 "nvme_io": false, 00:16:48.959 "nvme_io_md": false, 00:16:48.959 "write_zeroes": true, 00:16:48.959 "zcopy": true, 00:16:48.959 "get_zone_info": false, 00:16:48.959 "zone_management": false, 00:16:48.959 "zone_append": false, 00:16:48.959 "compare": false, 00:16:48.959 "compare_and_write": false, 00:16:48.959 "abort": true, 00:16:48.959 "seek_hole": false, 00:16:48.959 "seek_data": false, 00:16:48.959 "copy": true, 00:16:48.959 "nvme_iov_md": false 00:16:48.959 }, 00:16:48.959 "memory_domains": [ 00:16:48.959 { 00:16:48.959 "dma_device_id": "system", 00:16:48.959 "dma_device_type": 1 00:16:48.959 }, 00:16:48.959 { 00:16:48.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.959 "dma_device_type": 2 00:16:48.959 } 00:16:48.959 ], 00:16:48.959 "driver_specific": {} 00:16:48.959 }' 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:48.959 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:49.527 "name": "BaseBdev3", 00:16:49.527 "aliases": [ 00:16:49.527 "f5ebcab6-48bc-11ef-a06c-59ddad71024c" 00:16:49.527 ], 00:16:49.527 "product_name": "Malloc disk", 00:16:49.527 "block_size": 512, 00:16:49.527 "num_blocks": 65536, 00:16:49.527 "uuid": "f5ebcab6-48bc-11ef-a06c-59ddad71024c", 00:16:49.527 "assigned_rate_limits": { 00:16:49.527 "rw_ios_per_sec": 0, 00:16:49.527 "rw_mbytes_per_sec": 0, 00:16:49.527 "r_mbytes_per_sec": 0, 00:16:49.527 "w_mbytes_per_sec": 0 00:16:49.527 }, 00:16:49.527 "claimed": true, 00:16:49.527 "claim_type": "exclusive_write", 00:16:49.527 "zoned": false, 00:16:49.527 "supported_io_types": { 00:16:49.527 "read": true, 00:16:49.527 "write": true, 00:16:49.527 "unmap": true, 00:16:49.527 "flush": true, 00:16:49.527 "reset": true, 00:16:49.527 "nvme_admin": false, 00:16:49.527 "nvme_io": false, 00:16:49.527 "nvme_io_md": false, 00:16:49.527 "write_zeroes": true, 00:16:49.527 "zcopy": true, 00:16:49.527 "get_zone_info": false, 00:16:49.527 "zone_management": false, 00:16:49.527 "zone_append": false, 00:16:49.527 "compare": false, 00:16:49.527 "compare_and_write": false, 00:16:49.527 "abort": true, 00:16:49.527 "seek_hole": false, 00:16:49.527 "seek_data": false, 00:16:49.527 "copy": true, 00:16:49.527 "nvme_iov_md": false 00:16:49.527 }, 00:16:49.527 "memory_domains": [ 00:16:49.527 { 00:16:49.527 "dma_device_id": "system", 00:16:49.527 "dma_device_type": 1 00:16:49.527 }, 00:16:49.527 { 00:16:49.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.527 "dma_device_type": 2 00:16:49.527 } 00:16:49.527 ], 00:16:49.527 "driver_specific": {} 00:16:49.527 }' 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:49.527 06:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:49.786 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:49.786 "name": "BaseBdev4", 00:16:49.786 "aliases": [ 00:16:49.786 "f65a44d5-48bc-11ef-a06c-59ddad71024c" 00:16:49.786 ], 00:16:49.786 "product_name": "Malloc disk", 00:16:49.786 "block_size": 512, 00:16:49.786 "num_blocks": 65536, 00:16:49.786 "uuid": "f65a44d5-48bc-11ef-a06c-59ddad71024c", 00:16:49.786 "assigned_rate_limits": { 00:16:49.786 "rw_ios_per_sec": 0, 00:16:49.786 "rw_mbytes_per_sec": 0, 00:16:49.786 "r_mbytes_per_sec": 0, 00:16:49.786 "w_mbytes_per_sec": 0 00:16:49.786 }, 00:16:49.786 "claimed": true, 00:16:49.786 "claim_type": "exclusive_write", 00:16:49.786 "zoned": false, 00:16:49.786 "supported_io_types": { 00:16:49.786 "read": true, 00:16:49.786 "write": true, 00:16:49.786 "unmap": true, 00:16:49.786 "flush": true, 00:16:49.786 "reset": true, 00:16:49.786 "nvme_admin": false, 00:16:49.786 "nvme_io": false, 00:16:49.786 "nvme_io_md": false, 00:16:49.786 "write_zeroes": true, 00:16:49.786 "zcopy": true, 00:16:49.786 "get_zone_info": false, 00:16:49.786 "zone_management": false, 00:16:49.786 "zone_append": false, 00:16:49.786 "compare": false, 00:16:49.786 "compare_and_write": false, 00:16:49.786 "abort": true, 00:16:49.786 "seek_hole": false, 00:16:49.786 "seek_data": false, 00:16:49.786 "copy": true, 00:16:49.786 "nvme_iov_md": false 00:16:49.786 }, 00:16:49.786 "memory_domains": [ 00:16:49.786 { 00:16:49.786 "dma_device_id": "system", 00:16:49.786 "dma_device_type": 1 00:16:49.786 }, 00:16:49.786 { 00:16:49.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.786 "dma_device_type": 2 00:16:49.786 } 00:16:49.786 ], 00:16:49.786 "driver_specific": {} 00:16:49.786 }' 00:16:49.786 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:49.786 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:49.786 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:49.786 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:49.786 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:49.786 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:49.786 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:49.786 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:49.786 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:49.786 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:49.786 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:49.786 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:49.786 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:50.045 [2024-07-23 06:30:02.436792] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.045 [2024-07-23 06:30:02.436829] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:50.045 [2024-07-23 06:30:02.436855] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.045 [2024-07-23 06:30:02.436872] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.045 [2024-07-23 06:30:02.436876] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe270e034f00 name Existed_Raid, state offline 00:16:50.045 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 60735 00:16:50.045 06:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 60735 ']' 00:16:50.045 06:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 60735 00:16:50.045 06:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:50.045 06:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:50.045 06:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 60735 00:16:50.045 06:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:16:50.045 06:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:16:50.045 06:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:16:50.045 killing process with pid 60735 00:16:50.045 06:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60735' 00:16:50.045 06:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 60735 00:16:50.045 [2024-07-23 06:30:02.468140] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:50.045 06:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 60735 00:16:50.045 [2024-07-23 06:30:02.492380] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:50.303 06:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:50.303 00:16:50.303 real 0m28.847s 00:16:50.303 user 0m52.994s 00:16:50.303 sys 0m3.864s 00:16:50.303 06:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.303 ************************************ 00:16:50.303 END TEST raid_state_function_test 00:16:50.303 06:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.303 ************************************ 00:16:50.303 06:30:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:50.303 06:30:02 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:16:50.303 06:30:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:50.303 06:30:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.303 06:30:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.303 ************************************ 00:16:50.303 START TEST raid_state_function_test_sb 00:16:50.303 ************************************ 00:16:50.303 06:30:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:16:50.303 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:50.303 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:50.303 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=61562 00:16:50.304 Process raid pid: 61562 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 61562' 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 61562 /var/tmp/spdk-raid.sock 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 61562 ']' 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.304 06:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.304 [2024-07-23 06:30:02.738920] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:50.304 [2024-07-23 06:30:02.739165] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:50.870 EAL: TSC is not safe to use in SMP mode 00:16:50.870 EAL: TSC is not invariant 00:16:50.870 [2024-07-23 06:30:03.289127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.870 [2024-07-23 06:30:03.386711] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
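For reference, the trace that follows exercises a create / verify / delete cycle over the RPC socket that the bdev_svc instance started above listens on. A minimal standalone sketch of that cycle is given here, using only RPC calls and jq filters that appear elsewhere in this log; the rpc shell variable, the loops, and the comments are illustrative conveniences and are not lines from the test script itself:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# create the four 32 MiB / 512-byte-block malloc bdevs the test uses as members
for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
  $rpc bdev_malloc_create 32 512 -b "$name"
done

# assemble them into a concat raid bdev with a 64 KB strip and an on-disk superblock (-s)
$rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# the raid bdev should now report state "online" with 4 of 4 members discovered
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

# per-member property checks, mirroring the jq filters used in the trace
for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
  info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
  [[ $(jq .block_size    <<< "$info") == 512  ]]   # 512-byte data blocks
  [[ $(jq .md_size       <<< "$info") == null ]]   # no separate metadata region
  [[ $(jq .md_interleave <<< "$info") == null ]]
  [[ $(jq .dif_type      <<< "$info") == null ]]   # DIF protection not enabled
done

# tear the volume down again
$rpc bdev_raid_delete Existed_Raid

In the trace itself the equivalent steps are driven through the autotest helpers visible in the xtrace output (waitforbdev, verify_raid_bdev_state, verify_raid_bdev_properties), which add the configuring/online state assertions seen below.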
00:16:50.870 [2024-07-23 06:30:03.389577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.870 [2024-07-23 06:30:03.390687] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.870 [2024-07-23 06:30:03.390708] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.436 06:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.436 06:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:51.436 06:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:51.694 [2024-07-23 06:30:04.092817] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.694 [2024-07-23 06:30:04.092921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.694 [2024-07-23 06:30:04.092942] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.694 [2024-07-23 06:30:04.092950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.694 [2024-07-23 06:30:04.092954] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:51.694 [2024-07-23 06:30:04.092960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.694 [2024-07-23 06:30:04.092964] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:51.694 [2024-07-23 06:30:04.092970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:51.694 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:51.694 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:51.694 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:51.694 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:51.694 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:51.694 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:51.694 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.694 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.694 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.694 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.694 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.694 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.952 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.952 "name": "Existed_Raid", 00:16:51.952 "uuid": 
"fefea758-48bc-11ef-a06c-59ddad71024c", 00:16:51.952 "strip_size_kb": 64, 00:16:51.952 "state": "configuring", 00:16:51.952 "raid_level": "concat", 00:16:51.952 "superblock": true, 00:16:51.952 "num_base_bdevs": 4, 00:16:51.952 "num_base_bdevs_discovered": 0, 00:16:51.952 "num_base_bdevs_operational": 4, 00:16:51.952 "base_bdevs_list": [ 00:16:51.952 { 00:16:51.952 "name": "BaseBdev1", 00:16:51.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.952 "is_configured": false, 00:16:51.952 "data_offset": 0, 00:16:51.952 "data_size": 0 00:16:51.952 }, 00:16:51.952 { 00:16:51.952 "name": "BaseBdev2", 00:16:51.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.952 "is_configured": false, 00:16:51.952 "data_offset": 0, 00:16:51.952 "data_size": 0 00:16:51.952 }, 00:16:51.952 { 00:16:51.952 "name": "BaseBdev3", 00:16:51.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.952 "is_configured": false, 00:16:51.952 "data_offset": 0, 00:16:51.952 "data_size": 0 00:16:51.952 }, 00:16:51.952 { 00:16:51.952 "name": "BaseBdev4", 00:16:51.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.952 "is_configured": false, 00:16:51.952 "data_offset": 0, 00:16:51.952 "data_size": 0 00:16:51.952 } 00:16:51.952 ] 00:16:51.952 }' 00:16:51.952 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.952 06:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.520 06:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:52.520 [2024-07-23 06:30:05.012811] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.520 [2024-07-23 06:30:05.012845] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2aaa00034500 name Existed_Raid, state configuring 00:16:52.520 06:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:53.117 [2024-07-23 06:30:05.324828] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.117 [2024-07-23 06:30:05.324882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.117 [2024-07-23 06:30:05.324902] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.117 [2024-07-23 06:30:05.324931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.117 [2024-07-23 06:30:05.324949] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:53.117 [2024-07-23 06:30:05.324956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.117 [2024-07-23 06:30:05.324960] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:53.117 [2024-07-23 06:30:05.324966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:53.117 06:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:53.117 [2024-07-23 06:30:05.573901] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 
is claimed 00:16:53.117 BaseBdev1 00:16:53.117 06:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:53.117 06:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:53.117 06:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:53.117 06:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:53.117 06:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:53.117 06:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:53.117 06:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:53.375 06:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:53.632 [ 00:16:53.632 { 00:16:53.632 "name": "BaseBdev1", 00:16:53.632 "aliases": [ 00:16:53.632 "ffe07cdf-48bc-11ef-a06c-59ddad71024c" 00:16:53.632 ], 00:16:53.632 "product_name": "Malloc disk", 00:16:53.633 "block_size": 512, 00:16:53.633 "num_blocks": 65536, 00:16:53.633 "uuid": "ffe07cdf-48bc-11ef-a06c-59ddad71024c", 00:16:53.633 "assigned_rate_limits": { 00:16:53.633 "rw_ios_per_sec": 0, 00:16:53.633 "rw_mbytes_per_sec": 0, 00:16:53.633 "r_mbytes_per_sec": 0, 00:16:53.633 "w_mbytes_per_sec": 0 00:16:53.633 }, 00:16:53.633 "claimed": true, 00:16:53.633 "claim_type": "exclusive_write", 00:16:53.633 "zoned": false, 00:16:53.633 "supported_io_types": { 00:16:53.633 "read": true, 00:16:53.633 "write": true, 00:16:53.633 "unmap": true, 00:16:53.633 "flush": true, 00:16:53.633 "reset": true, 00:16:53.633 "nvme_admin": false, 00:16:53.633 "nvme_io": false, 00:16:53.633 "nvme_io_md": false, 00:16:53.633 "write_zeroes": true, 00:16:53.633 "zcopy": true, 00:16:53.633 "get_zone_info": false, 00:16:53.633 "zone_management": false, 00:16:53.633 "zone_append": false, 00:16:53.633 "compare": false, 00:16:53.633 "compare_and_write": false, 00:16:53.633 "abort": true, 00:16:53.633 "seek_hole": false, 00:16:53.633 "seek_data": false, 00:16:53.633 "copy": true, 00:16:53.633 "nvme_iov_md": false 00:16:53.633 }, 00:16:53.633 "memory_domains": [ 00:16:53.633 { 00:16:53.633 "dma_device_id": "system", 00:16:53.633 "dma_device_type": 1 00:16:53.633 }, 00:16:53.633 { 00:16:53.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.633 "dma_device_type": 2 00:16:53.633 } 00:16:53.633 ], 00:16:53.633 "driver_specific": {} 00:16:53.633 } 00:16:53.633 ] 00:16:53.633 06:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:53.633 06:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:53.633 06:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:53.633 06:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:53.633 06:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:53.633 06:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:53.633 06:30:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:53.633 06:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.633 06:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.633 06:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.633 06:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.633 06:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.633 06:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.198 06:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.198 "name": "Existed_Raid", 00:16:54.198 "uuid": "ffbaa4d4-48bc-11ef-a06c-59ddad71024c", 00:16:54.198 "strip_size_kb": 64, 00:16:54.199 "state": "configuring", 00:16:54.199 "raid_level": "concat", 00:16:54.199 "superblock": true, 00:16:54.199 "num_base_bdevs": 4, 00:16:54.199 "num_base_bdevs_discovered": 1, 00:16:54.199 "num_base_bdevs_operational": 4, 00:16:54.199 "base_bdevs_list": [ 00:16:54.199 { 00:16:54.199 "name": "BaseBdev1", 00:16:54.199 "uuid": "ffe07cdf-48bc-11ef-a06c-59ddad71024c", 00:16:54.199 "is_configured": true, 00:16:54.199 "data_offset": 2048, 00:16:54.199 "data_size": 63488 00:16:54.199 }, 00:16:54.199 { 00:16:54.199 "name": "BaseBdev2", 00:16:54.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.199 "is_configured": false, 00:16:54.199 "data_offset": 0, 00:16:54.199 "data_size": 0 00:16:54.199 }, 00:16:54.199 { 00:16:54.199 "name": "BaseBdev3", 00:16:54.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.199 "is_configured": false, 00:16:54.199 "data_offset": 0, 00:16:54.199 "data_size": 0 00:16:54.199 }, 00:16:54.199 { 00:16:54.199 "name": "BaseBdev4", 00:16:54.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.199 "is_configured": false, 00:16:54.199 "data_offset": 0, 00:16:54.199 "data_size": 0 00:16:54.199 } 00:16:54.199 ] 00:16:54.199 }' 00:16:54.199 06:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.199 06:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.457 06:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:54.715 [2024-07-23 06:30:07.132871] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.715 [2024-07-23 06:30:07.132906] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2aaa00034500 name Existed_Raid, state configuring 00:16:54.716 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:54.974 [2024-07-23 06:30:07.424895] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.974 [2024-07-23 06:30:07.425714] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.974 [2024-07-23 06:30:07.425755] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.974 [2024-07-23 06:30:07.425761] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.974 [2024-07-23 06:30:07.425770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.974 [2024-07-23 06:30:07.425773] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:54.974 [2024-07-23 06:30:07.425781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:54.974 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:54.974 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:54.974 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:54.974 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:54.974 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:54.974 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:54.974 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:54.974 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:54.974 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:54.974 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:54.974 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:54.974 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:54.974 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.974 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.238 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:55.238 "name": "Existed_Raid", 00:16:55.238 "uuid": "00fb1690-48bd-11ef-a06c-59ddad71024c", 00:16:55.238 "strip_size_kb": 64, 00:16:55.238 "state": "configuring", 00:16:55.238 "raid_level": "concat", 00:16:55.238 "superblock": true, 00:16:55.238 "num_base_bdevs": 4, 00:16:55.238 "num_base_bdevs_discovered": 1, 00:16:55.238 "num_base_bdevs_operational": 4, 00:16:55.238 "base_bdevs_list": [ 00:16:55.238 { 00:16:55.238 "name": "BaseBdev1", 00:16:55.238 "uuid": "ffe07cdf-48bc-11ef-a06c-59ddad71024c", 00:16:55.238 "is_configured": true, 00:16:55.238 "data_offset": 2048, 00:16:55.238 "data_size": 63488 00:16:55.238 }, 00:16:55.238 { 00:16:55.238 "name": "BaseBdev2", 00:16:55.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.238 "is_configured": false, 00:16:55.238 "data_offset": 0, 00:16:55.238 "data_size": 0 00:16:55.238 }, 00:16:55.238 { 00:16:55.238 "name": "BaseBdev3", 00:16:55.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.238 "is_configured": false, 00:16:55.238 "data_offset": 0, 00:16:55.238 "data_size": 0 00:16:55.238 }, 00:16:55.238 { 00:16:55.238 "name": "BaseBdev4", 
00:16:55.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.238 "is_configured": false, 00:16:55.238 "data_offset": 0, 00:16:55.238 "data_size": 0 00:16:55.238 } 00:16:55.238 ] 00:16:55.238 }' 00:16:55.238 06:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:55.238 06:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.805 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:56.064 [2024-07-23 06:30:08.357073] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.064 BaseBdev2 00:16:56.064 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:56.064 06:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:56.064 06:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:56.064 06:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:56.064 06:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:56.064 06:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:56.064 06:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.323 06:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:56.583 [ 00:16:56.583 { 00:16:56.583 "name": "BaseBdev2", 00:16:56.583 "aliases": [ 00:16:56.583 "01894de4-48bd-11ef-a06c-59ddad71024c" 00:16:56.583 ], 00:16:56.583 "product_name": "Malloc disk", 00:16:56.583 "block_size": 512, 00:16:56.583 "num_blocks": 65536, 00:16:56.583 "uuid": "01894de4-48bd-11ef-a06c-59ddad71024c", 00:16:56.583 "assigned_rate_limits": { 00:16:56.583 "rw_ios_per_sec": 0, 00:16:56.583 "rw_mbytes_per_sec": 0, 00:16:56.583 "r_mbytes_per_sec": 0, 00:16:56.583 "w_mbytes_per_sec": 0 00:16:56.583 }, 00:16:56.583 "claimed": true, 00:16:56.583 "claim_type": "exclusive_write", 00:16:56.583 "zoned": false, 00:16:56.583 "supported_io_types": { 00:16:56.583 "read": true, 00:16:56.583 "write": true, 00:16:56.583 "unmap": true, 00:16:56.583 "flush": true, 00:16:56.583 "reset": true, 00:16:56.583 "nvme_admin": false, 00:16:56.583 "nvme_io": false, 00:16:56.583 "nvme_io_md": false, 00:16:56.583 "write_zeroes": true, 00:16:56.583 "zcopy": true, 00:16:56.583 "get_zone_info": false, 00:16:56.583 "zone_management": false, 00:16:56.583 "zone_append": false, 00:16:56.583 "compare": false, 00:16:56.583 "compare_and_write": false, 00:16:56.583 "abort": true, 00:16:56.583 "seek_hole": false, 00:16:56.583 "seek_data": false, 00:16:56.583 "copy": true, 00:16:56.583 "nvme_iov_md": false 00:16:56.583 }, 00:16:56.583 "memory_domains": [ 00:16:56.583 { 00:16:56.583 "dma_device_id": "system", 00:16:56.583 "dma_device_type": 1 00:16:56.583 }, 00:16:56.583 { 00:16:56.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.583 "dma_device_type": 2 00:16:56.583 } 00:16:56.583 ], 00:16:56.583 "driver_specific": {} 00:16:56.583 } 00:16:56.583 ] 00:16:56.583 06:30:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:56.583 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:56.583 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:56.583 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:56.583 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:56.583 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:56.583 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:56.583 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:56.583 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:56.583 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:56.583 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:56.583 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:56.583 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:56.583 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.583 06:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.842 06:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.842 "name": "Existed_Raid", 00:16:56.842 "uuid": "00fb1690-48bd-11ef-a06c-59ddad71024c", 00:16:56.842 "strip_size_kb": 64, 00:16:56.842 "state": "configuring", 00:16:56.842 "raid_level": "concat", 00:16:56.842 "superblock": true, 00:16:56.842 "num_base_bdevs": 4, 00:16:56.842 "num_base_bdevs_discovered": 2, 00:16:56.842 "num_base_bdevs_operational": 4, 00:16:56.842 "base_bdevs_list": [ 00:16:56.842 { 00:16:56.842 "name": "BaseBdev1", 00:16:56.842 "uuid": "ffe07cdf-48bc-11ef-a06c-59ddad71024c", 00:16:56.842 "is_configured": true, 00:16:56.842 "data_offset": 2048, 00:16:56.842 "data_size": 63488 00:16:56.842 }, 00:16:56.842 { 00:16:56.842 "name": "BaseBdev2", 00:16:56.842 "uuid": "01894de4-48bd-11ef-a06c-59ddad71024c", 00:16:56.842 "is_configured": true, 00:16:56.842 "data_offset": 2048, 00:16:56.842 "data_size": 63488 00:16:56.842 }, 00:16:56.842 { 00:16:56.842 "name": "BaseBdev3", 00:16:56.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.842 "is_configured": false, 00:16:56.842 "data_offset": 0, 00:16:56.842 "data_size": 0 00:16:56.842 }, 00:16:56.842 { 00:16:56.842 "name": "BaseBdev4", 00:16:56.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.842 "is_configured": false, 00:16:56.842 "data_offset": 0, 00:16:56.842 "data_size": 0 00:16:56.842 } 00:16:56.842 ] 00:16:56.842 }' 00:16:56.842 06:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.842 06:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.118 06:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:57.387 [2024-07-23 06:30:09.797084] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:57.387 BaseBdev3 00:16:57.387 06:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:57.387 06:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:57.387 06:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:57.387 06:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:57.387 06:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:57.387 06:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:57.387 06:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:57.645 06:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:57.904 [ 00:16:57.904 { 00:16:57.904 "name": "BaseBdev3", 00:16:57.904 "aliases": [ 00:16:57.904 "02650a18-48bd-11ef-a06c-59ddad71024c" 00:16:57.904 ], 00:16:57.904 "product_name": "Malloc disk", 00:16:57.904 "block_size": 512, 00:16:57.904 "num_blocks": 65536, 00:16:57.904 "uuid": "02650a18-48bd-11ef-a06c-59ddad71024c", 00:16:57.904 "assigned_rate_limits": { 00:16:57.904 "rw_ios_per_sec": 0, 00:16:57.904 "rw_mbytes_per_sec": 0, 00:16:57.904 "r_mbytes_per_sec": 0, 00:16:57.904 "w_mbytes_per_sec": 0 00:16:57.904 }, 00:16:57.904 "claimed": true, 00:16:57.904 "claim_type": "exclusive_write", 00:16:57.904 "zoned": false, 00:16:57.904 "supported_io_types": { 00:16:57.904 "read": true, 00:16:57.904 "write": true, 00:16:57.904 "unmap": true, 00:16:57.904 "flush": true, 00:16:57.904 "reset": true, 00:16:57.904 "nvme_admin": false, 00:16:57.904 "nvme_io": false, 00:16:57.904 "nvme_io_md": false, 00:16:57.904 "write_zeroes": true, 00:16:57.904 "zcopy": true, 00:16:57.904 "get_zone_info": false, 00:16:57.904 "zone_management": false, 00:16:57.904 "zone_append": false, 00:16:57.904 "compare": false, 00:16:57.904 "compare_and_write": false, 00:16:57.904 "abort": true, 00:16:57.904 "seek_hole": false, 00:16:57.904 "seek_data": false, 00:16:57.904 "copy": true, 00:16:57.904 "nvme_iov_md": false 00:16:57.904 }, 00:16:57.904 "memory_domains": [ 00:16:57.904 { 00:16:57.904 "dma_device_id": "system", 00:16:57.904 "dma_device_type": 1 00:16:57.904 }, 00:16:57.904 { 00:16:57.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.904 "dma_device_type": 2 00:16:57.904 } 00:16:57.904 ], 00:16:57.904 "driver_specific": {} 00:16:57.904 } 00:16:57.904 ] 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.904 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.162 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.162 "name": "Existed_Raid", 00:16:58.162 "uuid": "00fb1690-48bd-11ef-a06c-59ddad71024c", 00:16:58.162 "strip_size_kb": 64, 00:16:58.162 "state": "configuring", 00:16:58.162 "raid_level": "concat", 00:16:58.162 "superblock": true, 00:16:58.162 "num_base_bdevs": 4, 00:16:58.162 "num_base_bdevs_discovered": 3, 00:16:58.162 "num_base_bdevs_operational": 4, 00:16:58.162 "base_bdevs_list": [ 00:16:58.162 { 00:16:58.162 "name": "BaseBdev1", 00:16:58.162 "uuid": "ffe07cdf-48bc-11ef-a06c-59ddad71024c", 00:16:58.162 "is_configured": true, 00:16:58.162 "data_offset": 2048, 00:16:58.162 "data_size": 63488 00:16:58.163 }, 00:16:58.163 { 00:16:58.163 "name": "BaseBdev2", 00:16:58.163 "uuid": "01894de4-48bd-11ef-a06c-59ddad71024c", 00:16:58.163 "is_configured": true, 00:16:58.163 "data_offset": 2048, 00:16:58.163 "data_size": 63488 00:16:58.163 }, 00:16:58.163 { 00:16:58.163 "name": "BaseBdev3", 00:16:58.163 "uuid": "02650a18-48bd-11ef-a06c-59ddad71024c", 00:16:58.163 "is_configured": true, 00:16:58.163 "data_offset": 2048, 00:16:58.163 "data_size": 63488 00:16:58.163 }, 00:16:58.163 { 00:16:58.163 "name": "BaseBdev4", 00:16:58.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.163 "is_configured": false, 00:16:58.163 "data_offset": 0, 00:16:58.163 "data_size": 0 00:16:58.163 } 00:16:58.163 ] 00:16:58.163 }' 00:16:58.163 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.163 06:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.420 06:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:58.679 [2024-07-23 06:30:11.085187] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:58.679 [2024-07-23 06:30:11.085271] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2aaa00034a00 00:16:58.679 [2024-07-23 06:30:11.085278] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:58.679 [2024-07-23 
06:30:11.085299] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2aaa00097e20 00:16:58.679 [2024-07-23 06:30:11.085354] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2aaa00034a00 00:16:58.679 [2024-07-23 06:30:11.085358] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2aaa00034a00 00:16:58.679 [2024-07-23 06:30:11.085378] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.679 BaseBdev4 00:16:58.679 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:58.679 06:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:58.679 06:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:58.679 06:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:58.679 06:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:58.679 06:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:58.679 06:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:58.955 06:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:59.213 [ 00:16:59.213 { 00:16:59.213 "name": "BaseBdev4", 00:16:59.213 "aliases": [ 00:16:59.213 "03299637-48bd-11ef-a06c-59ddad71024c" 00:16:59.213 ], 00:16:59.213 "product_name": "Malloc disk", 00:16:59.213 "block_size": 512, 00:16:59.213 "num_blocks": 65536, 00:16:59.213 "uuid": "03299637-48bd-11ef-a06c-59ddad71024c", 00:16:59.213 "assigned_rate_limits": { 00:16:59.213 "rw_ios_per_sec": 0, 00:16:59.213 "rw_mbytes_per_sec": 0, 00:16:59.213 "r_mbytes_per_sec": 0, 00:16:59.213 "w_mbytes_per_sec": 0 00:16:59.213 }, 00:16:59.213 "claimed": true, 00:16:59.213 "claim_type": "exclusive_write", 00:16:59.213 "zoned": false, 00:16:59.213 "supported_io_types": { 00:16:59.213 "read": true, 00:16:59.213 "write": true, 00:16:59.213 "unmap": true, 00:16:59.213 "flush": true, 00:16:59.213 "reset": true, 00:16:59.213 "nvme_admin": false, 00:16:59.213 "nvme_io": false, 00:16:59.213 "nvme_io_md": false, 00:16:59.213 "write_zeroes": true, 00:16:59.213 "zcopy": true, 00:16:59.213 "get_zone_info": false, 00:16:59.213 "zone_management": false, 00:16:59.213 "zone_append": false, 00:16:59.213 "compare": false, 00:16:59.213 "compare_and_write": false, 00:16:59.213 "abort": true, 00:16:59.213 "seek_hole": false, 00:16:59.213 "seek_data": false, 00:16:59.213 "copy": true, 00:16:59.213 "nvme_iov_md": false 00:16:59.213 }, 00:16:59.213 "memory_domains": [ 00:16:59.213 { 00:16:59.213 "dma_device_id": "system", 00:16:59.213 "dma_device_type": 1 00:16:59.213 }, 00:16:59.213 { 00:16:59.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.213 "dma_device_type": 2 00:16:59.213 } 00:16:59.213 ], 00:16:59.213 "driver_specific": {} 00:16:59.213 } 00:16:59.213 ] 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.213 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.471 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.471 "name": "Existed_Raid", 00:16:59.471 "uuid": "00fb1690-48bd-11ef-a06c-59ddad71024c", 00:16:59.471 "strip_size_kb": 64, 00:16:59.471 "state": "online", 00:16:59.471 "raid_level": "concat", 00:16:59.471 "superblock": true, 00:16:59.471 "num_base_bdevs": 4, 00:16:59.471 "num_base_bdevs_discovered": 4, 00:16:59.471 "num_base_bdevs_operational": 4, 00:16:59.471 "base_bdevs_list": [ 00:16:59.471 { 00:16:59.471 "name": "BaseBdev1", 00:16:59.471 "uuid": "ffe07cdf-48bc-11ef-a06c-59ddad71024c", 00:16:59.471 "is_configured": true, 00:16:59.471 "data_offset": 2048, 00:16:59.471 "data_size": 63488 00:16:59.471 }, 00:16:59.471 { 00:16:59.471 "name": "BaseBdev2", 00:16:59.471 "uuid": "01894de4-48bd-11ef-a06c-59ddad71024c", 00:16:59.471 "is_configured": true, 00:16:59.471 "data_offset": 2048, 00:16:59.471 "data_size": 63488 00:16:59.471 }, 00:16:59.471 { 00:16:59.471 "name": "BaseBdev3", 00:16:59.471 "uuid": "02650a18-48bd-11ef-a06c-59ddad71024c", 00:16:59.471 "is_configured": true, 00:16:59.471 "data_offset": 2048, 00:16:59.471 "data_size": 63488 00:16:59.471 }, 00:16:59.471 { 00:16:59.471 "name": "BaseBdev4", 00:16:59.471 "uuid": "03299637-48bd-11ef-a06c-59ddad71024c", 00:16:59.471 "is_configured": true, 00:16:59.471 "data_offset": 2048, 00:16:59.471 "data_size": 63488 00:16:59.471 } 00:16:59.471 ] 00:16:59.471 }' 00:16:59.471 06:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.471 06:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.730 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:59.730 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:59.730 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # 
local raid_bdev_info 00:16:59.730 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:59.730 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:59.730 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:59.730 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:59.730 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:59.988 [2024-07-23 06:30:12.481215] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.988 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:59.988 "name": "Existed_Raid", 00:16:59.988 "aliases": [ 00:16:59.988 "00fb1690-48bd-11ef-a06c-59ddad71024c" 00:16:59.988 ], 00:16:59.988 "product_name": "Raid Volume", 00:16:59.988 "block_size": 512, 00:16:59.988 "num_blocks": 253952, 00:16:59.988 "uuid": "00fb1690-48bd-11ef-a06c-59ddad71024c", 00:16:59.988 "assigned_rate_limits": { 00:16:59.988 "rw_ios_per_sec": 0, 00:16:59.988 "rw_mbytes_per_sec": 0, 00:16:59.988 "r_mbytes_per_sec": 0, 00:16:59.988 "w_mbytes_per_sec": 0 00:16:59.988 }, 00:16:59.988 "claimed": false, 00:16:59.988 "zoned": false, 00:16:59.988 "supported_io_types": { 00:16:59.988 "read": true, 00:16:59.988 "write": true, 00:16:59.988 "unmap": true, 00:16:59.988 "flush": true, 00:16:59.988 "reset": true, 00:16:59.988 "nvme_admin": false, 00:16:59.988 "nvme_io": false, 00:16:59.988 "nvme_io_md": false, 00:16:59.988 "write_zeroes": true, 00:16:59.988 "zcopy": false, 00:16:59.988 "get_zone_info": false, 00:16:59.988 "zone_management": false, 00:16:59.988 "zone_append": false, 00:16:59.988 "compare": false, 00:16:59.988 "compare_and_write": false, 00:16:59.988 "abort": false, 00:16:59.988 "seek_hole": false, 00:16:59.988 "seek_data": false, 00:16:59.988 "copy": false, 00:16:59.988 "nvme_iov_md": false 00:16:59.988 }, 00:16:59.988 "memory_domains": [ 00:16:59.988 { 00:16:59.988 "dma_device_id": "system", 00:16:59.988 "dma_device_type": 1 00:16:59.988 }, 00:16:59.988 { 00:16:59.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.988 "dma_device_type": 2 00:16:59.988 }, 00:16:59.988 { 00:16:59.988 "dma_device_id": "system", 00:16:59.988 "dma_device_type": 1 00:16:59.988 }, 00:16:59.988 { 00:16:59.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.988 "dma_device_type": 2 00:16:59.988 }, 00:16:59.988 { 00:16:59.988 "dma_device_id": "system", 00:16:59.988 "dma_device_type": 1 00:16:59.988 }, 00:16:59.988 { 00:16:59.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.988 "dma_device_type": 2 00:16:59.988 }, 00:16:59.988 { 00:16:59.988 "dma_device_id": "system", 00:16:59.988 "dma_device_type": 1 00:16:59.988 }, 00:16:59.988 { 00:16:59.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.988 "dma_device_type": 2 00:16:59.988 } 00:16:59.988 ], 00:16:59.988 "driver_specific": { 00:16:59.988 "raid": { 00:16:59.988 "uuid": "00fb1690-48bd-11ef-a06c-59ddad71024c", 00:16:59.988 "strip_size_kb": 64, 00:16:59.988 "state": "online", 00:16:59.988 "raid_level": "concat", 00:16:59.988 "superblock": true, 00:16:59.988 "num_base_bdevs": 4, 00:16:59.988 "num_base_bdevs_discovered": 4, 00:16:59.988 "num_base_bdevs_operational": 4, 00:16:59.988 "base_bdevs_list": [ 00:16:59.988 { 00:16:59.988 "name": "BaseBdev1", 00:16:59.988 "uuid": 
"ffe07cdf-48bc-11ef-a06c-59ddad71024c", 00:16:59.988 "is_configured": true, 00:16:59.988 "data_offset": 2048, 00:16:59.989 "data_size": 63488 00:16:59.989 }, 00:16:59.989 { 00:16:59.989 "name": "BaseBdev2", 00:16:59.989 "uuid": "01894de4-48bd-11ef-a06c-59ddad71024c", 00:16:59.989 "is_configured": true, 00:16:59.989 "data_offset": 2048, 00:16:59.989 "data_size": 63488 00:16:59.989 }, 00:16:59.989 { 00:16:59.989 "name": "BaseBdev3", 00:16:59.989 "uuid": "02650a18-48bd-11ef-a06c-59ddad71024c", 00:16:59.989 "is_configured": true, 00:16:59.989 "data_offset": 2048, 00:16:59.989 "data_size": 63488 00:16:59.989 }, 00:16:59.989 { 00:16:59.989 "name": "BaseBdev4", 00:16:59.989 "uuid": "03299637-48bd-11ef-a06c-59ddad71024c", 00:16:59.989 "is_configured": true, 00:16:59.989 "data_offset": 2048, 00:16:59.989 "data_size": 63488 00:16:59.989 } 00:16:59.989 ] 00:16:59.989 } 00:16:59.989 } 00:16:59.989 }' 00:16:59.989 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.989 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:59.989 BaseBdev2 00:16:59.989 BaseBdev3 00:16:59.989 BaseBdev4' 00:16:59.989 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.989 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:59.989 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.554 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.554 "name": "BaseBdev1", 00:17:00.554 "aliases": [ 00:17:00.554 "ffe07cdf-48bc-11ef-a06c-59ddad71024c" 00:17:00.554 ], 00:17:00.554 "product_name": "Malloc disk", 00:17:00.554 "block_size": 512, 00:17:00.554 "num_blocks": 65536, 00:17:00.554 "uuid": "ffe07cdf-48bc-11ef-a06c-59ddad71024c", 00:17:00.554 "assigned_rate_limits": { 00:17:00.554 "rw_ios_per_sec": 0, 00:17:00.554 "rw_mbytes_per_sec": 0, 00:17:00.554 "r_mbytes_per_sec": 0, 00:17:00.554 "w_mbytes_per_sec": 0 00:17:00.554 }, 00:17:00.554 "claimed": true, 00:17:00.554 "claim_type": "exclusive_write", 00:17:00.554 "zoned": false, 00:17:00.554 "supported_io_types": { 00:17:00.554 "read": true, 00:17:00.554 "write": true, 00:17:00.554 "unmap": true, 00:17:00.554 "flush": true, 00:17:00.554 "reset": true, 00:17:00.554 "nvme_admin": false, 00:17:00.555 "nvme_io": false, 00:17:00.555 "nvme_io_md": false, 00:17:00.555 "write_zeroes": true, 00:17:00.555 "zcopy": true, 00:17:00.555 "get_zone_info": false, 00:17:00.555 "zone_management": false, 00:17:00.555 "zone_append": false, 00:17:00.555 "compare": false, 00:17:00.555 "compare_and_write": false, 00:17:00.555 "abort": true, 00:17:00.555 "seek_hole": false, 00:17:00.555 "seek_data": false, 00:17:00.555 "copy": true, 00:17:00.555 "nvme_iov_md": false 00:17:00.555 }, 00:17:00.555 "memory_domains": [ 00:17:00.555 { 00:17:00.555 "dma_device_id": "system", 00:17:00.555 "dma_device_type": 1 00:17:00.555 }, 00:17:00.555 { 00:17:00.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.555 "dma_device_type": 2 00:17:00.555 } 00:17:00.555 ], 00:17:00.555 "driver_specific": {} 00:17:00.555 }' 00:17:00.555 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.555 06:30:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.555 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.555 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.555 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.555 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.555 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.555 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.555 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:00.555 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.555 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.555 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.555 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:00.555 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:00.555 06:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.813 "name": "BaseBdev2", 00:17:00.813 "aliases": [ 00:17:00.813 "01894de4-48bd-11ef-a06c-59ddad71024c" 00:17:00.813 ], 00:17:00.813 "product_name": "Malloc disk", 00:17:00.813 "block_size": 512, 00:17:00.813 "num_blocks": 65536, 00:17:00.813 "uuid": "01894de4-48bd-11ef-a06c-59ddad71024c", 00:17:00.813 "assigned_rate_limits": { 00:17:00.813 "rw_ios_per_sec": 0, 00:17:00.813 "rw_mbytes_per_sec": 0, 00:17:00.813 "r_mbytes_per_sec": 0, 00:17:00.813 "w_mbytes_per_sec": 0 00:17:00.813 }, 00:17:00.813 "claimed": true, 00:17:00.813 "claim_type": "exclusive_write", 00:17:00.813 "zoned": false, 00:17:00.813 "supported_io_types": { 00:17:00.813 "read": true, 00:17:00.813 "write": true, 00:17:00.813 "unmap": true, 00:17:00.813 "flush": true, 00:17:00.813 "reset": true, 00:17:00.813 "nvme_admin": false, 00:17:00.813 "nvme_io": false, 00:17:00.813 "nvme_io_md": false, 00:17:00.813 "write_zeroes": true, 00:17:00.813 "zcopy": true, 00:17:00.813 "get_zone_info": false, 00:17:00.813 "zone_management": false, 00:17:00.813 "zone_append": false, 00:17:00.813 "compare": false, 00:17:00.813 "compare_and_write": false, 00:17:00.813 "abort": true, 00:17:00.813 "seek_hole": false, 00:17:00.813 "seek_data": false, 00:17:00.813 "copy": true, 00:17:00.813 "nvme_iov_md": false 00:17:00.813 }, 00:17:00.813 "memory_domains": [ 00:17:00.813 { 00:17:00.813 "dma_device_id": "system", 00:17:00.813 "dma_device_type": 1 00:17:00.813 }, 00:17:00.813 { 00:17:00.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.813 "dma_device_type": 2 00:17:00.813 } 00:17:00.813 ], 00:17:00.813 "driver_specific": {} 00:17:00.813 }' 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:00.813 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:01.071 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:01.071 "name": "BaseBdev3", 00:17:01.071 "aliases": [ 00:17:01.071 "02650a18-48bd-11ef-a06c-59ddad71024c" 00:17:01.071 ], 00:17:01.071 "product_name": "Malloc disk", 00:17:01.071 "block_size": 512, 00:17:01.071 "num_blocks": 65536, 00:17:01.071 "uuid": "02650a18-48bd-11ef-a06c-59ddad71024c", 00:17:01.071 "assigned_rate_limits": { 00:17:01.071 "rw_ios_per_sec": 0, 00:17:01.071 "rw_mbytes_per_sec": 0, 00:17:01.071 "r_mbytes_per_sec": 0, 00:17:01.071 "w_mbytes_per_sec": 0 00:17:01.071 }, 00:17:01.071 "claimed": true, 00:17:01.071 "claim_type": "exclusive_write", 00:17:01.071 "zoned": false, 00:17:01.071 "supported_io_types": { 00:17:01.071 "read": true, 00:17:01.071 "write": true, 00:17:01.071 "unmap": true, 00:17:01.071 "flush": true, 00:17:01.071 "reset": true, 00:17:01.071 "nvme_admin": false, 00:17:01.071 "nvme_io": false, 00:17:01.071 "nvme_io_md": false, 00:17:01.071 "write_zeroes": true, 00:17:01.071 "zcopy": true, 00:17:01.071 "get_zone_info": false, 00:17:01.071 "zone_management": false, 00:17:01.071 "zone_append": false, 00:17:01.071 "compare": false, 00:17:01.071 "compare_and_write": false, 00:17:01.071 "abort": true, 00:17:01.071 "seek_hole": false, 00:17:01.071 "seek_data": false, 00:17:01.071 "copy": true, 00:17:01.071 "nvme_iov_md": false 00:17:01.071 }, 00:17:01.071 "memory_domains": [ 00:17:01.071 { 00:17:01.071 "dma_device_id": "system", 00:17:01.071 "dma_device_type": 1 00:17:01.071 }, 00:17:01.071 { 00:17:01.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.071 "dma_device_type": 2 00:17:01.071 } 00:17:01.071 ], 00:17:01.071 "driver_specific": {} 00:17:01.071 }' 00:17:01.071 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.071 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.071 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:01.071 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:17:01.071 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.071 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:01.071 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.071 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.071 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:01.071 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.072 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.072 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:01.072 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:01.072 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:01.072 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:01.383 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:01.383 "name": "BaseBdev4", 00:17:01.383 "aliases": [ 00:17:01.383 "03299637-48bd-11ef-a06c-59ddad71024c" 00:17:01.383 ], 00:17:01.383 "product_name": "Malloc disk", 00:17:01.383 "block_size": 512, 00:17:01.383 "num_blocks": 65536, 00:17:01.383 "uuid": "03299637-48bd-11ef-a06c-59ddad71024c", 00:17:01.383 "assigned_rate_limits": { 00:17:01.383 "rw_ios_per_sec": 0, 00:17:01.383 "rw_mbytes_per_sec": 0, 00:17:01.383 "r_mbytes_per_sec": 0, 00:17:01.383 "w_mbytes_per_sec": 0 00:17:01.383 }, 00:17:01.383 "claimed": true, 00:17:01.383 "claim_type": "exclusive_write", 00:17:01.383 "zoned": false, 00:17:01.383 "supported_io_types": { 00:17:01.383 "read": true, 00:17:01.383 "write": true, 00:17:01.383 "unmap": true, 00:17:01.383 "flush": true, 00:17:01.383 "reset": true, 00:17:01.383 "nvme_admin": false, 00:17:01.383 "nvme_io": false, 00:17:01.383 "nvme_io_md": false, 00:17:01.383 "write_zeroes": true, 00:17:01.383 "zcopy": true, 00:17:01.383 "get_zone_info": false, 00:17:01.383 "zone_management": false, 00:17:01.383 "zone_append": false, 00:17:01.383 "compare": false, 00:17:01.383 "compare_and_write": false, 00:17:01.383 "abort": true, 00:17:01.383 "seek_hole": false, 00:17:01.383 "seek_data": false, 00:17:01.383 "copy": true, 00:17:01.383 "nvme_iov_md": false 00:17:01.383 }, 00:17:01.383 "memory_domains": [ 00:17:01.383 { 00:17:01.383 "dma_device_id": "system", 00:17:01.383 "dma_device_type": 1 00:17:01.383 }, 00:17:01.383 { 00:17:01.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.383 "dma_device_type": 2 00:17:01.383 } 00:17:01.383 ], 00:17:01.383 "driver_specific": {} 00:17:01.383 }' 00:17:01.383 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.383 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.383 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:01.383 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.383 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.383 06:30:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:01.383 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.383 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.383 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:01.383 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.383 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.383 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:01.383 06:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:01.643 [2024-07-23 06:30:14.037250] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:01.643 [2024-07-23 06:30:14.037275] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.643 [2024-07-23 06:30:14.037306] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.643 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.900 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:01.900 "name": "Existed_Raid", 00:17:01.900 "uuid": "00fb1690-48bd-11ef-a06c-59ddad71024c", 00:17:01.900 "strip_size_kb": 64, 
00:17:01.900 "state": "offline", 00:17:01.900 "raid_level": "concat", 00:17:01.900 "superblock": true, 00:17:01.900 "num_base_bdevs": 4, 00:17:01.900 "num_base_bdevs_discovered": 3, 00:17:01.900 "num_base_bdevs_operational": 3, 00:17:01.900 "base_bdevs_list": [ 00:17:01.900 { 00:17:01.900 "name": null, 00:17:01.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.900 "is_configured": false, 00:17:01.900 "data_offset": 2048, 00:17:01.900 "data_size": 63488 00:17:01.900 }, 00:17:01.900 { 00:17:01.900 "name": "BaseBdev2", 00:17:01.900 "uuid": "01894de4-48bd-11ef-a06c-59ddad71024c", 00:17:01.900 "is_configured": true, 00:17:01.900 "data_offset": 2048, 00:17:01.900 "data_size": 63488 00:17:01.900 }, 00:17:01.900 { 00:17:01.900 "name": "BaseBdev3", 00:17:01.900 "uuid": "02650a18-48bd-11ef-a06c-59ddad71024c", 00:17:01.900 "is_configured": true, 00:17:01.900 "data_offset": 2048, 00:17:01.900 "data_size": 63488 00:17:01.900 }, 00:17:01.900 { 00:17:01.900 "name": "BaseBdev4", 00:17:01.900 "uuid": "03299637-48bd-11ef-a06c-59ddad71024c", 00:17:01.900 "is_configured": true, 00:17:01.900 "data_offset": 2048, 00:17:01.900 "data_size": 63488 00:17:01.900 } 00:17:01.900 ] 00:17:01.900 }' 00:17:01.900 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:01.900 06:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.158 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:02.158 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:02.158 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.158 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:02.416 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:02.416 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:02.416 06:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:02.674 [2024-07-23 06:30:15.111469] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:02.674 06:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:02.674 06:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:02.674 06:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:02.674 06:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.932 06:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:02.932 06:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:02.932 06:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:03.190 [2024-07-23 06:30:15.621653] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:03.190 06:30:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:03.190 06:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:03.190 06:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.190 06:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:03.448 06:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:03.448 06:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:03.448 06:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:03.706 [2024-07-23 06:30:16.095522] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:03.706 [2024-07-23 06:30:16.095555] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2aaa00034a00 name Existed_Raid, state offline 00:17:03.706 06:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:03.706 06:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:03.706 06:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.706 06:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:03.964 06:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:03.964 06:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:03.964 06:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:03.964 06:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:03.964 06:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:03.964 06:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:04.222 BaseBdev2 00:17:04.222 06:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:04.222 06:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:04.222 06:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:04.222 06:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:04.222 06:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:04.222 06:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:04.222 06:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:04.481 06:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:04.740 [ 
00:17:04.740 { 00:17:04.740 "name": "BaseBdev2", 00:17:04.740 "aliases": [ 00:17:04.740 "066fa6b0-48bd-11ef-a06c-59ddad71024c" 00:17:04.740 ], 00:17:04.740 "product_name": "Malloc disk", 00:17:04.740 "block_size": 512, 00:17:04.740 "num_blocks": 65536, 00:17:04.740 "uuid": "066fa6b0-48bd-11ef-a06c-59ddad71024c", 00:17:04.740 "assigned_rate_limits": { 00:17:04.740 "rw_ios_per_sec": 0, 00:17:04.740 "rw_mbytes_per_sec": 0, 00:17:04.740 "r_mbytes_per_sec": 0, 00:17:04.740 "w_mbytes_per_sec": 0 00:17:04.740 }, 00:17:04.740 "claimed": false, 00:17:04.740 "zoned": false, 00:17:04.740 "supported_io_types": { 00:17:04.740 "read": true, 00:17:04.740 "write": true, 00:17:04.740 "unmap": true, 00:17:04.740 "flush": true, 00:17:04.740 "reset": true, 00:17:04.740 "nvme_admin": false, 00:17:04.740 "nvme_io": false, 00:17:04.740 "nvme_io_md": false, 00:17:04.740 "write_zeroes": true, 00:17:04.740 "zcopy": true, 00:17:04.740 "get_zone_info": false, 00:17:04.740 "zone_management": false, 00:17:04.740 "zone_append": false, 00:17:04.740 "compare": false, 00:17:04.740 "compare_and_write": false, 00:17:04.740 "abort": true, 00:17:04.740 "seek_hole": false, 00:17:04.740 "seek_data": false, 00:17:04.740 "copy": true, 00:17:04.740 "nvme_iov_md": false 00:17:04.740 }, 00:17:04.740 "memory_domains": [ 00:17:04.740 { 00:17:04.740 "dma_device_id": "system", 00:17:04.740 "dma_device_type": 1 00:17:04.740 }, 00:17:04.740 { 00:17:04.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.740 "dma_device_type": 2 00:17:04.740 } 00:17:04.740 ], 00:17:04.740 "driver_specific": {} 00:17:04.740 } 00:17:04.740 ] 00:17:04.740 06:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:04.740 06:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:04.740 06:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:04.740 06:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:04.999 BaseBdev3 00:17:04.999 06:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:04.999 06:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:04.999 06:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:04.999 06:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:04.999 06:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:04.999 06:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:04.999 06:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:05.257 06:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:05.544 [ 00:17:05.544 { 00:17:05.544 "name": "BaseBdev3", 00:17:05.544 "aliases": [ 00:17:05.544 "06e1ccf0-48bd-11ef-a06c-59ddad71024c" 00:17:05.544 ], 00:17:05.544 "product_name": "Malloc disk", 00:17:05.544 "block_size": 512, 00:17:05.544 "num_blocks": 65536, 00:17:05.544 "uuid": 
"06e1ccf0-48bd-11ef-a06c-59ddad71024c", 00:17:05.544 "assigned_rate_limits": { 00:17:05.544 "rw_ios_per_sec": 0, 00:17:05.544 "rw_mbytes_per_sec": 0, 00:17:05.544 "r_mbytes_per_sec": 0, 00:17:05.544 "w_mbytes_per_sec": 0 00:17:05.544 }, 00:17:05.544 "claimed": false, 00:17:05.544 "zoned": false, 00:17:05.544 "supported_io_types": { 00:17:05.544 "read": true, 00:17:05.544 "write": true, 00:17:05.544 "unmap": true, 00:17:05.544 "flush": true, 00:17:05.544 "reset": true, 00:17:05.544 "nvme_admin": false, 00:17:05.544 "nvme_io": false, 00:17:05.544 "nvme_io_md": false, 00:17:05.544 "write_zeroes": true, 00:17:05.544 "zcopy": true, 00:17:05.544 "get_zone_info": false, 00:17:05.544 "zone_management": false, 00:17:05.544 "zone_append": false, 00:17:05.544 "compare": false, 00:17:05.544 "compare_and_write": false, 00:17:05.544 "abort": true, 00:17:05.544 "seek_hole": false, 00:17:05.544 "seek_data": false, 00:17:05.544 "copy": true, 00:17:05.544 "nvme_iov_md": false 00:17:05.544 }, 00:17:05.544 "memory_domains": [ 00:17:05.544 { 00:17:05.544 "dma_device_id": "system", 00:17:05.544 "dma_device_type": 1 00:17:05.544 }, 00:17:05.544 { 00:17:05.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.544 "dma_device_type": 2 00:17:05.544 } 00:17:05.544 ], 00:17:05.544 "driver_specific": {} 00:17:05.544 } 00:17:05.544 ] 00:17:05.544 06:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:05.544 06:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:05.544 06:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:05.544 06:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:05.838 BaseBdev4 00:17:05.838 06:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:17:05.838 06:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:05.838 06:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:05.838 06:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:05.838 06:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:05.838 06:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:05.839 06:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:06.097 06:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:06.356 [ 00:17:06.356 { 00:17:06.356 "name": "BaseBdev4", 00:17:06.356 "aliases": [ 00:17:06.356 "07727593-48bd-11ef-a06c-59ddad71024c" 00:17:06.356 ], 00:17:06.356 "product_name": "Malloc disk", 00:17:06.356 "block_size": 512, 00:17:06.356 "num_blocks": 65536, 00:17:06.356 "uuid": "07727593-48bd-11ef-a06c-59ddad71024c", 00:17:06.356 "assigned_rate_limits": { 00:17:06.356 "rw_ios_per_sec": 0, 00:17:06.356 "rw_mbytes_per_sec": 0, 00:17:06.356 "r_mbytes_per_sec": 0, 00:17:06.356 "w_mbytes_per_sec": 0 00:17:06.356 }, 00:17:06.356 "claimed": false, 00:17:06.356 "zoned": false, 00:17:06.356 
"supported_io_types": { 00:17:06.356 "read": true, 00:17:06.356 "write": true, 00:17:06.356 "unmap": true, 00:17:06.356 "flush": true, 00:17:06.356 "reset": true, 00:17:06.356 "nvme_admin": false, 00:17:06.356 "nvme_io": false, 00:17:06.356 "nvme_io_md": false, 00:17:06.356 "write_zeroes": true, 00:17:06.356 "zcopy": true, 00:17:06.356 "get_zone_info": false, 00:17:06.356 "zone_management": false, 00:17:06.356 "zone_append": false, 00:17:06.356 "compare": false, 00:17:06.356 "compare_and_write": false, 00:17:06.356 "abort": true, 00:17:06.356 "seek_hole": false, 00:17:06.356 "seek_data": false, 00:17:06.356 "copy": true, 00:17:06.356 "nvme_iov_md": false 00:17:06.356 }, 00:17:06.356 "memory_domains": [ 00:17:06.356 { 00:17:06.356 "dma_device_id": "system", 00:17:06.356 "dma_device_type": 1 00:17:06.356 }, 00:17:06.356 { 00:17:06.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.356 "dma_device_type": 2 00:17:06.356 } 00:17:06.356 ], 00:17:06.356 "driver_specific": {} 00:17:06.356 } 00:17:06.356 ] 00:17:06.614 06:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:06.614 06:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:06.614 06:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:06.614 06:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:06.873 [2024-07-23 06:30:19.177535] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:06.873 [2024-07-23 06:30:19.177590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:06.873 [2024-07-23 06:30:19.177600] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.873 [2024-07-23 06:30:19.178164] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.873 [2024-07-23 06:30:19.178175] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:06.873 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:06.873 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:06.873 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:06.873 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:06.873 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:06.873 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:06.873 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:06.873 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:06.873 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:06.873 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:06.873 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.873 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.132 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.132 "name": "Existed_Raid", 00:17:07.132 "uuid": "07fc6633-48bd-11ef-a06c-59ddad71024c", 00:17:07.132 "strip_size_kb": 64, 00:17:07.132 "state": "configuring", 00:17:07.132 "raid_level": "concat", 00:17:07.132 "superblock": true, 00:17:07.132 "num_base_bdevs": 4, 00:17:07.132 "num_base_bdevs_discovered": 3, 00:17:07.132 "num_base_bdevs_operational": 4, 00:17:07.132 "base_bdevs_list": [ 00:17:07.132 { 00:17:07.132 "name": "BaseBdev1", 00:17:07.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.132 "is_configured": false, 00:17:07.132 "data_offset": 0, 00:17:07.132 "data_size": 0 00:17:07.132 }, 00:17:07.132 { 00:17:07.132 "name": "BaseBdev2", 00:17:07.132 "uuid": "066fa6b0-48bd-11ef-a06c-59ddad71024c", 00:17:07.132 "is_configured": true, 00:17:07.132 "data_offset": 2048, 00:17:07.132 "data_size": 63488 00:17:07.132 }, 00:17:07.132 { 00:17:07.132 "name": "BaseBdev3", 00:17:07.132 "uuid": "06e1ccf0-48bd-11ef-a06c-59ddad71024c", 00:17:07.132 "is_configured": true, 00:17:07.132 "data_offset": 2048, 00:17:07.132 "data_size": 63488 00:17:07.132 }, 00:17:07.132 { 00:17:07.132 "name": "BaseBdev4", 00:17:07.132 "uuid": "07727593-48bd-11ef-a06c-59ddad71024c", 00:17:07.132 "is_configured": true, 00:17:07.132 "data_offset": 2048, 00:17:07.132 "data_size": 63488 00:17:07.132 } 00:17:07.132 ] 00:17:07.132 }' 00:17:07.132 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.132 06:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.391 06:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:07.649 [2024-07-23 06:30:20.149575] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:07.649 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:07.649 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:07.649 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:07.649 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:07.649 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:07.649 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:07.649 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.649 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.649 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.649 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.649 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:07.649 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.216 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.217 "name": "Existed_Raid", 00:17:08.217 "uuid": "07fc6633-48bd-11ef-a06c-59ddad71024c", 00:17:08.217 "strip_size_kb": 64, 00:17:08.217 "state": "configuring", 00:17:08.217 "raid_level": "concat", 00:17:08.217 "superblock": true, 00:17:08.217 "num_base_bdevs": 4, 00:17:08.217 "num_base_bdevs_discovered": 2, 00:17:08.217 "num_base_bdevs_operational": 4, 00:17:08.217 "base_bdevs_list": [ 00:17:08.217 { 00:17:08.217 "name": "BaseBdev1", 00:17:08.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.217 "is_configured": false, 00:17:08.217 "data_offset": 0, 00:17:08.217 "data_size": 0 00:17:08.217 }, 00:17:08.217 { 00:17:08.217 "name": null, 00:17:08.217 "uuid": "066fa6b0-48bd-11ef-a06c-59ddad71024c", 00:17:08.217 "is_configured": false, 00:17:08.217 "data_offset": 2048, 00:17:08.217 "data_size": 63488 00:17:08.217 }, 00:17:08.217 { 00:17:08.217 "name": "BaseBdev3", 00:17:08.217 "uuid": "06e1ccf0-48bd-11ef-a06c-59ddad71024c", 00:17:08.217 "is_configured": true, 00:17:08.217 "data_offset": 2048, 00:17:08.217 "data_size": 63488 00:17:08.217 }, 00:17:08.217 { 00:17:08.217 "name": "BaseBdev4", 00:17:08.217 "uuid": "07727593-48bd-11ef-a06c-59ddad71024c", 00:17:08.217 "is_configured": true, 00:17:08.217 "data_offset": 2048, 00:17:08.217 "data_size": 63488 00:17:08.217 } 00:17:08.217 ] 00:17:08.217 }' 00:17:08.217 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.217 06:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.475 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:08.475 06:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.733 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:08.733 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:08.991 [2024-07-23 06:30:21.273819] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.991 BaseBdev1 00:17:08.991 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:08.991 06:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:08.991 06:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:08.991 06:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:08.991 06:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:08.991 06:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:08.991 06:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:09.249 06:30:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:09.507 [ 00:17:09.507 { 00:17:09.507 "name": "BaseBdev1", 00:17:09.507 "aliases": [ 00:17:09.507 "093c3efa-48bd-11ef-a06c-59ddad71024c" 00:17:09.507 ], 00:17:09.507 "product_name": "Malloc disk", 00:17:09.507 "block_size": 512, 00:17:09.507 "num_blocks": 65536, 00:17:09.507 "uuid": "093c3efa-48bd-11ef-a06c-59ddad71024c", 00:17:09.507 "assigned_rate_limits": { 00:17:09.507 "rw_ios_per_sec": 0, 00:17:09.507 "rw_mbytes_per_sec": 0, 00:17:09.507 "r_mbytes_per_sec": 0, 00:17:09.507 "w_mbytes_per_sec": 0 00:17:09.507 }, 00:17:09.507 "claimed": true, 00:17:09.507 "claim_type": "exclusive_write", 00:17:09.507 "zoned": false, 00:17:09.507 "supported_io_types": { 00:17:09.507 "read": true, 00:17:09.507 "write": true, 00:17:09.507 "unmap": true, 00:17:09.507 "flush": true, 00:17:09.507 "reset": true, 00:17:09.507 "nvme_admin": false, 00:17:09.507 "nvme_io": false, 00:17:09.507 "nvme_io_md": false, 00:17:09.507 "write_zeroes": true, 00:17:09.507 "zcopy": true, 00:17:09.507 "get_zone_info": false, 00:17:09.507 "zone_management": false, 00:17:09.507 "zone_append": false, 00:17:09.507 "compare": false, 00:17:09.507 "compare_and_write": false, 00:17:09.507 "abort": true, 00:17:09.507 "seek_hole": false, 00:17:09.507 "seek_data": false, 00:17:09.507 "copy": true, 00:17:09.507 "nvme_iov_md": false 00:17:09.507 }, 00:17:09.507 "memory_domains": [ 00:17:09.507 { 00:17:09.507 "dma_device_id": "system", 00:17:09.507 "dma_device_type": 1 00:17:09.507 }, 00:17:09.507 { 00:17:09.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.507 "dma_device_type": 2 00:17:09.507 } 00:17:09.507 ], 00:17:09.507 "driver_specific": {} 00:17:09.507 } 00:17:09.507 ] 00:17:09.507 06:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:09.507 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:09.507 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:09.507 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:09.507 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:09.507 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:09.507 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:09.507 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.507 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.507 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.507 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.507 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.507 06:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.765 06:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.765 "name": 
"Existed_Raid", 00:17:09.765 "uuid": "07fc6633-48bd-11ef-a06c-59ddad71024c", 00:17:09.765 "strip_size_kb": 64, 00:17:09.765 "state": "configuring", 00:17:09.765 "raid_level": "concat", 00:17:09.765 "superblock": true, 00:17:09.765 "num_base_bdevs": 4, 00:17:09.765 "num_base_bdevs_discovered": 3, 00:17:09.765 "num_base_bdevs_operational": 4, 00:17:09.765 "base_bdevs_list": [ 00:17:09.765 { 00:17:09.765 "name": "BaseBdev1", 00:17:09.765 "uuid": "093c3efa-48bd-11ef-a06c-59ddad71024c", 00:17:09.765 "is_configured": true, 00:17:09.765 "data_offset": 2048, 00:17:09.765 "data_size": 63488 00:17:09.765 }, 00:17:09.765 { 00:17:09.765 "name": null, 00:17:09.765 "uuid": "066fa6b0-48bd-11ef-a06c-59ddad71024c", 00:17:09.765 "is_configured": false, 00:17:09.765 "data_offset": 2048, 00:17:09.765 "data_size": 63488 00:17:09.765 }, 00:17:09.765 { 00:17:09.765 "name": "BaseBdev3", 00:17:09.765 "uuid": "06e1ccf0-48bd-11ef-a06c-59ddad71024c", 00:17:09.765 "is_configured": true, 00:17:09.765 "data_offset": 2048, 00:17:09.765 "data_size": 63488 00:17:09.765 }, 00:17:09.765 { 00:17:09.765 "name": "BaseBdev4", 00:17:09.765 "uuid": "07727593-48bd-11ef-a06c-59ddad71024c", 00:17:09.765 "is_configured": true, 00:17:09.765 "data_offset": 2048, 00:17:09.765 "data_size": 63488 00:17:09.765 } 00:17:09.765 ] 00:17:09.765 }' 00:17:09.765 06:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.765 06:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.061 06:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.061 06:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:10.341 06:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:10.341 06:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:10.599 [2024-07-23 06:30:23.073794] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:10.599 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:10.599 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:10.599 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:10.599 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:10.599 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:10.599 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:10.599 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.599 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.599 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.599 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.599 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.599 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.858 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.858 "name": "Existed_Raid", 00:17:10.858 "uuid": "07fc6633-48bd-11ef-a06c-59ddad71024c", 00:17:10.858 "strip_size_kb": 64, 00:17:10.858 "state": "configuring", 00:17:10.858 "raid_level": "concat", 00:17:10.858 "superblock": true, 00:17:10.858 "num_base_bdevs": 4, 00:17:10.858 "num_base_bdevs_discovered": 2, 00:17:10.858 "num_base_bdevs_operational": 4, 00:17:10.858 "base_bdevs_list": [ 00:17:10.858 { 00:17:10.858 "name": "BaseBdev1", 00:17:10.858 "uuid": "093c3efa-48bd-11ef-a06c-59ddad71024c", 00:17:10.858 "is_configured": true, 00:17:10.858 "data_offset": 2048, 00:17:10.858 "data_size": 63488 00:17:10.858 }, 00:17:10.858 { 00:17:10.858 "name": null, 00:17:10.858 "uuid": "066fa6b0-48bd-11ef-a06c-59ddad71024c", 00:17:10.858 "is_configured": false, 00:17:10.858 "data_offset": 2048, 00:17:10.858 "data_size": 63488 00:17:10.858 }, 00:17:10.858 { 00:17:10.858 "name": null, 00:17:10.858 "uuid": "06e1ccf0-48bd-11ef-a06c-59ddad71024c", 00:17:10.858 "is_configured": false, 00:17:10.858 "data_offset": 2048, 00:17:10.858 "data_size": 63488 00:17:10.858 }, 00:17:10.858 { 00:17:10.858 "name": "BaseBdev4", 00:17:10.858 "uuid": "07727593-48bd-11ef-a06c-59ddad71024c", 00:17:10.858 "is_configured": true, 00:17:10.858 "data_offset": 2048, 00:17:10.858 "data_size": 63488 00:17:10.858 } 00:17:10.858 ] 00:17:10.858 }' 00:17:10.858 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.858 06:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.425 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.425 06:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:11.682 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:11.682 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:11.940 [2024-07-23 06:30:24.313824] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.940 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:11.940 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:11.940 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:11.940 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:11.940 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:11.940 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:11.940 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.940 06:30:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.940 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.940 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.940 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.940 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.197 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:12.197 "name": "Existed_Raid", 00:17:12.197 "uuid": "07fc6633-48bd-11ef-a06c-59ddad71024c", 00:17:12.197 "strip_size_kb": 64, 00:17:12.197 "state": "configuring", 00:17:12.197 "raid_level": "concat", 00:17:12.197 "superblock": true, 00:17:12.197 "num_base_bdevs": 4, 00:17:12.197 "num_base_bdevs_discovered": 3, 00:17:12.197 "num_base_bdevs_operational": 4, 00:17:12.197 "base_bdevs_list": [ 00:17:12.197 { 00:17:12.197 "name": "BaseBdev1", 00:17:12.197 "uuid": "093c3efa-48bd-11ef-a06c-59ddad71024c", 00:17:12.197 "is_configured": true, 00:17:12.197 "data_offset": 2048, 00:17:12.197 "data_size": 63488 00:17:12.197 }, 00:17:12.197 { 00:17:12.197 "name": null, 00:17:12.197 "uuid": "066fa6b0-48bd-11ef-a06c-59ddad71024c", 00:17:12.197 "is_configured": false, 00:17:12.197 "data_offset": 2048, 00:17:12.197 "data_size": 63488 00:17:12.197 }, 00:17:12.197 { 00:17:12.197 "name": "BaseBdev3", 00:17:12.197 "uuid": "06e1ccf0-48bd-11ef-a06c-59ddad71024c", 00:17:12.197 "is_configured": true, 00:17:12.197 "data_offset": 2048, 00:17:12.197 "data_size": 63488 00:17:12.197 }, 00:17:12.197 { 00:17:12.197 "name": "BaseBdev4", 00:17:12.197 "uuid": "07727593-48bd-11ef-a06c-59ddad71024c", 00:17:12.197 "is_configured": true, 00:17:12.197 "data_offset": 2048, 00:17:12.197 "data_size": 63488 00:17:12.197 } 00:17:12.197 ] 00:17:12.197 }' 00:17:12.197 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:12.197 06:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.470 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.470 06:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:13.035 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:13.035 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:13.035 [2024-07-23 06:30:25.549904] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.293 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:13.293 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:13.293 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:13.293 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:13.293 06:30:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:13.293 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:13.293 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.293 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.293 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.293 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.293 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.293 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.551 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:13.551 "name": "Existed_Raid", 00:17:13.551 "uuid": "07fc6633-48bd-11ef-a06c-59ddad71024c", 00:17:13.551 "strip_size_kb": 64, 00:17:13.551 "state": "configuring", 00:17:13.551 "raid_level": "concat", 00:17:13.551 "superblock": true, 00:17:13.551 "num_base_bdevs": 4, 00:17:13.551 "num_base_bdevs_discovered": 2, 00:17:13.551 "num_base_bdevs_operational": 4, 00:17:13.551 "base_bdevs_list": [ 00:17:13.551 { 00:17:13.551 "name": null, 00:17:13.551 "uuid": "093c3efa-48bd-11ef-a06c-59ddad71024c", 00:17:13.551 "is_configured": false, 00:17:13.551 "data_offset": 2048, 00:17:13.551 "data_size": 63488 00:17:13.551 }, 00:17:13.551 { 00:17:13.551 "name": null, 00:17:13.551 "uuid": "066fa6b0-48bd-11ef-a06c-59ddad71024c", 00:17:13.551 "is_configured": false, 00:17:13.551 "data_offset": 2048, 00:17:13.551 "data_size": 63488 00:17:13.551 }, 00:17:13.551 { 00:17:13.551 "name": "BaseBdev3", 00:17:13.551 "uuid": "06e1ccf0-48bd-11ef-a06c-59ddad71024c", 00:17:13.551 "is_configured": true, 00:17:13.551 "data_offset": 2048, 00:17:13.551 "data_size": 63488 00:17:13.551 }, 00:17:13.551 { 00:17:13.551 "name": "BaseBdev4", 00:17:13.551 "uuid": "07727593-48bd-11ef-a06c-59ddad71024c", 00:17:13.551 "is_configured": true, 00:17:13.551 "data_offset": 2048, 00:17:13.551 "data_size": 63488 00:17:13.551 } 00:17:13.551 ] 00:17:13.551 }' 00:17:13.551 06:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:13.551 06:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.809 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.809 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:14.067 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:14.067 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:14.325 [2024-07-23 06:30:26.704148] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.326 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:14.326 
06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:14.326 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:14.326 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:14.326 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:14.326 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:14.326 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.326 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.326 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.326 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.326 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.326 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.584 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:14.584 "name": "Existed_Raid", 00:17:14.584 "uuid": "07fc6633-48bd-11ef-a06c-59ddad71024c", 00:17:14.584 "strip_size_kb": 64, 00:17:14.584 "state": "configuring", 00:17:14.584 "raid_level": "concat", 00:17:14.584 "superblock": true, 00:17:14.584 "num_base_bdevs": 4, 00:17:14.584 "num_base_bdevs_discovered": 3, 00:17:14.584 "num_base_bdevs_operational": 4, 00:17:14.584 "base_bdevs_list": [ 00:17:14.584 { 00:17:14.584 "name": null, 00:17:14.584 "uuid": "093c3efa-48bd-11ef-a06c-59ddad71024c", 00:17:14.584 "is_configured": false, 00:17:14.584 "data_offset": 2048, 00:17:14.584 "data_size": 63488 00:17:14.584 }, 00:17:14.584 { 00:17:14.584 "name": "BaseBdev2", 00:17:14.584 "uuid": "066fa6b0-48bd-11ef-a06c-59ddad71024c", 00:17:14.584 "is_configured": true, 00:17:14.584 "data_offset": 2048, 00:17:14.584 "data_size": 63488 00:17:14.584 }, 00:17:14.584 { 00:17:14.584 "name": "BaseBdev3", 00:17:14.584 "uuid": "06e1ccf0-48bd-11ef-a06c-59ddad71024c", 00:17:14.584 "is_configured": true, 00:17:14.584 "data_offset": 2048, 00:17:14.584 "data_size": 63488 00:17:14.584 }, 00:17:14.584 { 00:17:14.584 "name": "BaseBdev4", 00:17:14.584 "uuid": "07727593-48bd-11ef-a06c-59ddad71024c", 00:17:14.584 "is_configured": true, 00:17:14.584 "data_offset": 2048, 00:17:14.584 "data_size": 63488 00:17:14.584 } 00:17:14.584 ] 00:17:14.584 }' 00:17:14.584 06:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.584 06:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.844 06:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:14.844 06:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.103 06:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:15.103 06:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.103 06:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:15.670 06:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 093c3efa-48bd-11ef-a06c-59ddad71024c 00:17:15.670 [2024-07-23 06:30:28.136319] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:15.670 [2024-07-23 06:30:28.136376] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2aaa00034f00 00:17:15.670 [2024-07-23 06:30:28.136382] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:15.670 [2024-07-23 06:30:28.136404] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2aaa00097e20 00:17:15.670 [2024-07-23 06:30:28.136452] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2aaa00034f00 00:17:15.670 [2024-07-23 06:30:28.136457] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2aaa00034f00 00:17:15.670 [2024-07-23 06:30:28.136476] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.670 NewBaseBdev 00:17:15.670 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:15.670 06:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:15.670 06:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:15.670 06:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:15.670 06:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:15.670 06:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:15.670 06:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:15.929 06:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:16.186 [ 00:17:16.186 { 00:17:16.186 "name": "NewBaseBdev", 00:17:16.186 "aliases": [ 00:17:16.186 "093c3efa-48bd-11ef-a06c-59ddad71024c" 00:17:16.186 ], 00:17:16.186 "product_name": "Malloc disk", 00:17:16.186 "block_size": 512, 00:17:16.186 "num_blocks": 65536, 00:17:16.186 "uuid": "093c3efa-48bd-11ef-a06c-59ddad71024c", 00:17:16.186 "assigned_rate_limits": { 00:17:16.186 "rw_ios_per_sec": 0, 00:17:16.186 "rw_mbytes_per_sec": 0, 00:17:16.186 "r_mbytes_per_sec": 0, 00:17:16.186 "w_mbytes_per_sec": 0 00:17:16.186 }, 00:17:16.186 "claimed": true, 00:17:16.186 "claim_type": "exclusive_write", 00:17:16.186 "zoned": false, 00:17:16.186 "supported_io_types": { 00:17:16.186 "read": true, 00:17:16.186 "write": true, 00:17:16.186 "unmap": true, 00:17:16.186 "flush": true, 00:17:16.186 "reset": true, 00:17:16.186 "nvme_admin": false, 00:17:16.186 "nvme_io": false, 00:17:16.186 "nvme_io_md": false, 00:17:16.186 "write_zeroes": true, 00:17:16.186 "zcopy": true, 00:17:16.186 "get_zone_info": false, 00:17:16.186 "zone_management": false, 00:17:16.186 
"zone_append": false, 00:17:16.186 "compare": false, 00:17:16.186 "compare_and_write": false, 00:17:16.186 "abort": true, 00:17:16.186 "seek_hole": false, 00:17:16.186 "seek_data": false, 00:17:16.186 "copy": true, 00:17:16.186 "nvme_iov_md": false 00:17:16.186 }, 00:17:16.186 "memory_domains": [ 00:17:16.186 { 00:17:16.186 "dma_device_id": "system", 00:17:16.186 "dma_device_type": 1 00:17:16.186 }, 00:17:16.186 { 00:17:16.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.186 "dma_device_type": 2 00:17:16.186 } 00:17:16.186 ], 00:17:16.186 "driver_specific": {} 00:17:16.186 } 00:17:16.186 ] 00:17:16.186 06:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:16.186 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:16.186 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:16.186 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:16.186 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:16.186 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:16.186 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:16.186 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:16.186 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:16.186 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:16.186 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:16.186 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.186 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.752 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.752 "name": "Existed_Raid", 00:17:16.752 "uuid": "07fc6633-48bd-11ef-a06c-59ddad71024c", 00:17:16.752 "strip_size_kb": 64, 00:17:16.752 "state": "online", 00:17:16.752 "raid_level": "concat", 00:17:16.752 "superblock": true, 00:17:16.752 "num_base_bdevs": 4, 00:17:16.752 "num_base_bdevs_discovered": 4, 00:17:16.752 "num_base_bdevs_operational": 4, 00:17:16.752 "base_bdevs_list": [ 00:17:16.752 { 00:17:16.752 "name": "NewBaseBdev", 00:17:16.752 "uuid": "093c3efa-48bd-11ef-a06c-59ddad71024c", 00:17:16.752 "is_configured": true, 00:17:16.752 "data_offset": 2048, 00:17:16.752 "data_size": 63488 00:17:16.752 }, 00:17:16.752 { 00:17:16.752 "name": "BaseBdev2", 00:17:16.752 "uuid": "066fa6b0-48bd-11ef-a06c-59ddad71024c", 00:17:16.752 "is_configured": true, 00:17:16.752 "data_offset": 2048, 00:17:16.752 "data_size": 63488 00:17:16.752 }, 00:17:16.752 { 00:17:16.752 "name": "BaseBdev3", 00:17:16.752 "uuid": "06e1ccf0-48bd-11ef-a06c-59ddad71024c", 00:17:16.752 "is_configured": true, 00:17:16.752 "data_offset": 2048, 00:17:16.752 "data_size": 63488 00:17:16.752 }, 00:17:16.752 { 00:17:16.752 "name": "BaseBdev4", 00:17:16.752 "uuid": "07727593-48bd-11ef-a06c-59ddad71024c", 00:17:16.752 "is_configured": true, 
00:17:16.752 "data_offset": 2048, 00:17:16.752 "data_size": 63488 00:17:16.752 } 00:17:16.752 ] 00:17:16.752 }' 00:17:16.752 06:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.752 06:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.753 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:16.753 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:16.753 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:16.753 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:16.753 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:16.753 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:16.753 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:16.753 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:17.023 [2024-07-23 06:30:29.484256] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.023 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:17.023 "name": "Existed_Raid", 00:17:17.023 "aliases": [ 00:17:17.023 "07fc6633-48bd-11ef-a06c-59ddad71024c" 00:17:17.023 ], 00:17:17.023 "product_name": "Raid Volume", 00:17:17.023 "block_size": 512, 00:17:17.023 "num_blocks": 253952, 00:17:17.023 "uuid": "07fc6633-48bd-11ef-a06c-59ddad71024c", 00:17:17.023 "assigned_rate_limits": { 00:17:17.023 "rw_ios_per_sec": 0, 00:17:17.023 "rw_mbytes_per_sec": 0, 00:17:17.023 "r_mbytes_per_sec": 0, 00:17:17.023 "w_mbytes_per_sec": 0 00:17:17.023 }, 00:17:17.023 "claimed": false, 00:17:17.023 "zoned": false, 00:17:17.023 "supported_io_types": { 00:17:17.023 "read": true, 00:17:17.023 "write": true, 00:17:17.023 "unmap": true, 00:17:17.023 "flush": true, 00:17:17.023 "reset": true, 00:17:17.023 "nvme_admin": false, 00:17:17.023 "nvme_io": false, 00:17:17.023 "nvme_io_md": false, 00:17:17.023 "write_zeroes": true, 00:17:17.023 "zcopy": false, 00:17:17.023 "get_zone_info": false, 00:17:17.023 "zone_management": false, 00:17:17.023 "zone_append": false, 00:17:17.023 "compare": false, 00:17:17.023 "compare_and_write": false, 00:17:17.023 "abort": false, 00:17:17.023 "seek_hole": false, 00:17:17.023 "seek_data": false, 00:17:17.023 "copy": false, 00:17:17.023 "nvme_iov_md": false 00:17:17.023 }, 00:17:17.023 "memory_domains": [ 00:17:17.023 { 00:17:17.023 "dma_device_id": "system", 00:17:17.023 "dma_device_type": 1 00:17:17.023 }, 00:17:17.023 { 00:17:17.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.023 "dma_device_type": 2 00:17:17.023 }, 00:17:17.023 { 00:17:17.023 "dma_device_id": "system", 00:17:17.023 "dma_device_type": 1 00:17:17.023 }, 00:17:17.023 { 00:17:17.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.023 "dma_device_type": 2 00:17:17.023 }, 00:17:17.023 { 00:17:17.023 "dma_device_id": "system", 00:17:17.023 "dma_device_type": 1 00:17:17.023 }, 00:17:17.023 { 00:17:17.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.023 "dma_device_type": 2 00:17:17.023 }, 00:17:17.023 { 00:17:17.023 "dma_device_id": "system", 
00:17:17.023 "dma_device_type": 1 00:17:17.023 }, 00:17:17.023 { 00:17:17.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.023 "dma_device_type": 2 00:17:17.023 } 00:17:17.023 ], 00:17:17.023 "driver_specific": { 00:17:17.023 "raid": { 00:17:17.023 "uuid": "07fc6633-48bd-11ef-a06c-59ddad71024c", 00:17:17.023 "strip_size_kb": 64, 00:17:17.023 "state": "online", 00:17:17.023 "raid_level": "concat", 00:17:17.023 "superblock": true, 00:17:17.023 "num_base_bdevs": 4, 00:17:17.023 "num_base_bdevs_discovered": 4, 00:17:17.023 "num_base_bdevs_operational": 4, 00:17:17.023 "base_bdevs_list": [ 00:17:17.023 { 00:17:17.023 "name": "NewBaseBdev", 00:17:17.023 "uuid": "093c3efa-48bd-11ef-a06c-59ddad71024c", 00:17:17.023 "is_configured": true, 00:17:17.023 "data_offset": 2048, 00:17:17.023 "data_size": 63488 00:17:17.023 }, 00:17:17.023 { 00:17:17.023 "name": "BaseBdev2", 00:17:17.023 "uuid": "066fa6b0-48bd-11ef-a06c-59ddad71024c", 00:17:17.023 "is_configured": true, 00:17:17.023 "data_offset": 2048, 00:17:17.023 "data_size": 63488 00:17:17.023 }, 00:17:17.023 { 00:17:17.023 "name": "BaseBdev3", 00:17:17.023 "uuid": "06e1ccf0-48bd-11ef-a06c-59ddad71024c", 00:17:17.023 "is_configured": true, 00:17:17.023 "data_offset": 2048, 00:17:17.023 "data_size": 63488 00:17:17.023 }, 00:17:17.023 { 00:17:17.023 "name": "BaseBdev4", 00:17:17.023 "uuid": "07727593-48bd-11ef-a06c-59ddad71024c", 00:17:17.023 "is_configured": true, 00:17:17.023 "data_offset": 2048, 00:17:17.023 "data_size": 63488 00:17:17.023 } 00:17:17.023 ] 00:17:17.023 } 00:17:17.023 } 00:17:17.023 }' 00:17:17.023 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:17.023 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:17.023 BaseBdev2 00:17:17.023 BaseBdev3 00:17:17.023 BaseBdev4' 00:17:17.023 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:17.023 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:17.023 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:17.590 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:17.590 "name": "NewBaseBdev", 00:17:17.590 "aliases": [ 00:17:17.590 "093c3efa-48bd-11ef-a06c-59ddad71024c" 00:17:17.590 ], 00:17:17.590 "product_name": "Malloc disk", 00:17:17.590 "block_size": 512, 00:17:17.590 "num_blocks": 65536, 00:17:17.590 "uuid": "093c3efa-48bd-11ef-a06c-59ddad71024c", 00:17:17.590 "assigned_rate_limits": { 00:17:17.590 "rw_ios_per_sec": 0, 00:17:17.590 "rw_mbytes_per_sec": 0, 00:17:17.590 "r_mbytes_per_sec": 0, 00:17:17.590 "w_mbytes_per_sec": 0 00:17:17.590 }, 00:17:17.590 "claimed": true, 00:17:17.590 "claim_type": "exclusive_write", 00:17:17.590 "zoned": false, 00:17:17.590 "supported_io_types": { 00:17:17.590 "read": true, 00:17:17.590 "write": true, 00:17:17.590 "unmap": true, 00:17:17.590 "flush": true, 00:17:17.590 "reset": true, 00:17:17.590 "nvme_admin": false, 00:17:17.590 "nvme_io": false, 00:17:17.590 "nvme_io_md": false, 00:17:17.590 "write_zeroes": true, 00:17:17.590 "zcopy": true, 00:17:17.590 "get_zone_info": false, 00:17:17.590 "zone_management": false, 00:17:17.590 "zone_append": false, 00:17:17.590 "compare": false, 00:17:17.590 
"compare_and_write": false, 00:17:17.590 "abort": true, 00:17:17.590 "seek_hole": false, 00:17:17.590 "seek_data": false, 00:17:17.590 "copy": true, 00:17:17.590 "nvme_iov_md": false 00:17:17.590 }, 00:17:17.590 "memory_domains": [ 00:17:17.590 { 00:17:17.590 "dma_device_id": "system", 00:17:17.590 "dma_device_type": 1 00:17:17.590 }, 00:17:17.590 { 00:17:17.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.591 "dma_device_type": 2 00:17:17.591 } 00:17:17.591 ], 00:17:17.591 "driver_specific": {} 00:17:17.591 }' 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:17.591 06:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:17.849 "name": "BaseBdev2", 00:17:17.849 "aliases": [ 00:17:17.849 "066fa6b0-48bd-11ef-a06c-59ddad71024c" 00:17:17.849 ], 00:17:17.849 "product_name": "Malloc disk", 00:17:17.849 "block_size": 512, 00:17:17.849 "num_blocks": 65536, 00:17:17.849 "uuid": "066fa6b0-48bd-11ef-a06c-59ddad71024c", 00:17:17.849 "assigned_rate_limits": { 00:17:17.849 "rw_ios_per_sec": 0, 00:17:17.849 "rw_mbytes_per_sec": 0, 00:17:17.849 "r_mbytes_per_sec": 0, 00:17:17.849 "w_mbytes_per_sec": 0 00:17:17.849 }, 00:17:17.849 "claimed": true, 00:17:17.849 "claim_type": "exclusive_write", 00:17:17.849 "zoned": false, 00:17:17.849 "supported_io_types": { 00:17:17.849 "read": true, 00:17:17.849 "write": true, 00:17:17.849 "unmap": true, 00:17:17.849 "flush": true, 00:17:17.849 "reset": true, 00:17:17.849 "nvme_admin": false, 00:17:17.849 "nvme_io": false, 00:17:17.849 "nvme_io_md": false, 00:17:17.849 "write_zeroes": true, 00:17:17.849 "zcopy": true, 00:17:17.849 "get_zone_info": false, 00:17:17.849 "zone_management": false, 00:17:17.849 "zone_append": false, 00:17:17.849 "compare": false, 00:17:17.849 "compare_and_write": false, 00:17:17.849 "abort": true, 00:17:17.849 "seek_hole": false, 00:17:17.849 "seek_data": false, 00:17:17.849 "copy": true, 
00:17:17.849 "nvme_iov_md": false 00:17:17.849 }, 00:17:17.849 "memory_domains": [ 00:17:17.849 { 00:17:17.849 "dma_device_id": "system", 00:17:17.849 "dma_device_type": 1 00:17:17.849 }, 00:17:17.849 { 00:17:17.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.849 "dma_device_type": 2 00:17:17.849 } 00:17:17.849 ], 00:17:17.849 "driver_specific": {} 00:17:17.849 }' 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:17.849 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:18.107 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:18.107 "name": "BaseBdev3", 00:17:18.107 "aliases": [ 00:17:18.107 "06e1ccf0-48bd-11ef-a06c-59ddad71024c" 00:17:18.107 ], 00:17:18.107 "product_name": "Malloc disk", 00:17:18.107 "block_size": 512, 00:17:18.107 "num_blocks": 65536, 00:17:18.107 "uuid": "06e1ccf0-48bd-11ef-a06c-59ddad71024c", 00:17:18.107 "assigned_rate_limits": { 00:17:18.107 "rw_ios_per_sec": 0, 00:17:18.107 "rw_mbytes_per_sec": 0, 00:17:18.107 "r_mbytes_per_sec": 0, 00:17:18.107 "w_mbytes_per_sec": 0 00:17:18.107 }, 00:17:18.107 "claimed": true, 00:17:18.107 "claim_type": "exclusive_write", 00:17:18.107 "zoned": false, 00:17:18.107 "supported_io_types": { 00:17:18.107 "read": true, 00:17:18.107 "write": true, 00:17:18.107 "unmap": true, 00:17:18.107 "flush": true, 00:17:18.107 "reset": true, 00:17:18.107 "nvme_admin": false, 00:17:18.107 "nvme_io": false, 00:17:18.107 "nvme_io_md": false, 00:17:18.107 "write_zeroes": true, 00:17:18.107 "zcopy": true, 00:17:18.107 "get_zone_info": false, 00:17:18.107 "zone_management": false, 00:17:18.107 "zone_append": false, 00:17:18.107 "compare": false, 00:17:18.107 "compare_and_write": false, 00:17:18.107 "abort": true, 00:17:18.107 "seek_hole": false, 00:17:18.107 "seek_data": false, 00:17:18.107 "copy": true, 00:17:18.107 "nvme_iov_md": false 00:17:18.107 }, 00:17:18.107 "memory_domains": [ 00:17:18.107 { 00:17:18.107 "dma_device_id": "system", 00:17:18.107 
"dma_device_type": 1 00:17:18.107 }, 00:17:18.107 { 00:17:18.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.107 "dma_device_type": 2 00:17:18.107 } 00:17:18.107 ], 00:17:18.107 "driver_specific": {} 00:17:18.107 }' 00:17:18.107 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.107 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.107 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:18.107 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.107 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.107 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:18.107 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.107 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.107 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:18.107 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.107 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.108 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:18.108 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:18.108 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:18.108 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:18.366 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:18.366 "name": "BaseBdev4", 00:17:18.366 "aliases": [ 00:17:18.366 "07727593-48bd-11ef-a06c-59ddad71024c" 00:17:18.366 ], 00:17:18.366 "product_name": "Malloc disk", 00:17:18.366 "block_size": 512, 00:17:18.366 "num_blocks": 65536, 00:17:18.366 "uuid": "07727593-48bd-11ef-a06c-59ddad71024c", 00:17:18.366 "assigned_rate_limits": { 00:17:18.366 "rw_ios_per_sec": 0, 00:17:18.366 "rw_mbytes_per_sec": 0, 00:17:18.366 "r_mbytes_per_sec": 0, 00:17:18.366 "w_mbytes_per_sec": 0 00:17:18.366 }, 00:17:18.366 "claimed": true, 00:17:18.366 "claim_type": "exclusive_write", 00:17:18.366 "zoned": false, 00:17:18.366 "supported_io_types": { 00:17:18.366 "read": true, 00:17:18.366 "write": true, 00:17:18.366 "unmap": true, 00:17:18.366 "flush": true, 00:17:18.366 "reset": true, 00:17:18.366 "nvme_admin": false, 00:17:18.366 "nvme_io": false, 00:17:18.366 "nvme_io_md": false, 00:17:18.366 "write_zeroes": true, 00:17:18.366 "zcopy": true, 00:17:18.366 "get_zone_info": false, 00:17:18.366 "zone_management": false, 00:17:18.366 "zone_append": false, 00:17:18.366 "compare": false, 00:17:18.366 "compare_and_write": false, 00:17:18.366 "abort": true, 00:17:18.366 "seek_hole": false, 00:17:18.366 "seek_data": false, 00:17:18.366 "copy": true, 00:17:18.366 "nvme_iov_md": false 00:17:18.366 }, 00:17:18.366 "memory_domains": [ 00:17:18.366 { 00:17:18.366 "dma_device_id": "system", 00:17:18.366 "dma_device_type": 1 00:17:18.366 }, 00:17:18.366 { 00:17:18.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.366 "dma_device_type": 2 
00:17:18.366 } 00:17:18.366 ], 00:17:18.366 "driver_specific": {} 00:17:18.366 }' 00:17:18.366 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.625 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.625 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:18.625 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.625 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.625 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:18.625 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.625 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.625 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:18.625 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.625 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.625 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:18.625 06:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:18.883 [2024-07-23 06:30:31.224263] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:18.883 [2024-07-23 06:30:31.224321] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.883 [2024-07-23 06:30:31.224343] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.883 [2024-07-23 06:30:31.224359] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.883 [2024-07-23 06:30:31.224363] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2aaa00034f00 name Existed_Raid, state offline 00:17:18.883 06:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 61562 00:17:18.883 06:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 61562 ']' 00:17:18.883 06:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 61562 00:17:18.883 06:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:17:18.883 06:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:18.883 06:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 61562 00:17:18.883 06:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:17:18.883 06:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:18.883 killing process with pid 61562 00:17:18.883 06:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:18.883 06:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61562' 00:17:18.883 06:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 61562 00:17:18.883 
[2024-07-23 06:30:31.256073] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:18.883 06:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 61562 00:17:18.883 [2024-07-23 06:30:31.280475] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:19.148 06:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:19.148 00:17:19.148 real 0m28.745s 00:17:19.148 user 0m52.712s 00:17:19.148 sys 0m3.912s 00:17:19.148 06:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:19.148 06:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.148 ************************************ 00:17:19.148 END TEST raid_state_function_test_sb 00:17:19.148 ************************************ 00:17:19.148 06:30:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:19.148 06:30:31 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:17:19.148 06:30:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:19.148 06:30:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:19.148 06:30:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:19.148 ************************************ 00:17:19.148 START TEST raid_superblock_test 00:17:19.148 ************************************ 00:17:19.148 06:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=62384 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 
00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 62384 /var/tmp/spdk-raid.sock 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 62384 ']' 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.149 06:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.149 [2024-07-23 06:30:31.525381] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:19.149 [2024-07-23 06:30:31.525554] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:19.731 EAL: TSC is not safe to use in SMP mode 00:17:19.731 EAL: TSC is not invariant 00:17:19.731 [2024-07-23 06:30:32.064232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.731 [2024-07-23 06:30:32.154024] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:19.731 [2024-07-23 06:30:32.156148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.731 [2024-07-23 06:30:32.156912] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.731 [2024-07-23 06:30:32.156929] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.298 06:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.298 06:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:17:20.298 06:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:20.298 06:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:20.298 06:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:20.298 06:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:20.298 06:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:20.298 06:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:20.298 06:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:20.298 06:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:20.298 06:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:20.555 malloc1 00:17:20.555 06:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:20.813 [2024-07-23 06:30:33.101522] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:20.813 [2024-07-23 06:30:33.101602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.813 [2024-07-23 06:30:33.101615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e838a34780 00:17:20.813 [2024-07-23 06:30:33.101624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.813 [2024-07-23 06:30:33.102538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.813 [2024-07-23 06:30:33.102563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:20.813 pt1 00:17:20.813 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:20.813 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:20.814 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:20.814 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:20.814 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:20.814 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:20.814 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:20.814 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:20.814 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:21.072 malloc2 00:17:21.072 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:21.331 [2024-07-23 06:30:33.661548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:21.331 [2024-07-23 06:30:33.661627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.331 [2024-07-23 06:30:33.661656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e838a34c80 00:17:21.331 [2024-07-23 06:30:33.661665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.331 [2024-07-23 06:30:33.662335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.331 [2024-07-23 06:30:33.662360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:21.331 pt2 00:17:21.331 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:21.331 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:21.331 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:17:21.331 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:17:21.331 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:21.331 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:21.331 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_pt+=($bdev_pt) 00:17:21.331 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:21.331 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:21.590 malloc3 00:17:21.590 06:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:21.848 [2024-07-23 06:30:34.261577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:21.848 [2024-07-23 06:30:34.261632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.848 [2024-07-23 06:30:34.261645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e838a35180 00:17:21.848 [2024-07-23 06:30:34.261653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.848 [2024-07-23 06:30:34.262329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.848 [2024-07-23 06:30:34.262355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:21.848 pt3 00:17:21.848 06:30:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:21.848 06:30:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:21.848 06:30:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:17:21.848 06:30:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:17:21.848 06:30:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:21.848 06:30:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:21.848 06:30:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:21.848 06:30:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:21.848 06:30:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:22.106 malloc4 00:17:22.106 06:30:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:22.365 [2024-07-23 06:30:34.841586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:22.365 [2024-07-23 06:30:34.841644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.365 [2024-07-23 06:30:34.841657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e838a35680 00:17:22.365 [2024-07-23 06:30:34.841665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.365 [2024-07-23 06:30:34.842332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.365 [2024-07-23 06:30:34.842355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:22.365 pt4 00:17:22.365 06:30:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:22.365 06:30:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:22.365 06:30:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:22.624 [2024-07-23 06:30:35.141602] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:22.624 [2024-07-23 06:30:35.142184] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:22.624 [2024-07-23 06:30:35.142207] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:22.624 [2024-07-23 06:30:35.142218] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:22.624 [2024-07-23 06:30:35.142272] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x28e838a35900 00:17:22.624 [2024-07-23 06:30:35.142279] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:22.624 [2024-07-23 06:30:35.142313] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28e838a97e20 00:17:22.624 [2024-07-23 06:30:35.142388] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x28e838a35900 00:17:22.624 [2024-07-23 06:30:35.142393] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x28e838a35900 00:17:22.624 [2024-07-23 06:30:35.142420] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.882 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:22.882 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:22.882 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:22.882 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:22.882 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:22.882 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:22.882 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.882 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.882 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.882 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.882 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.882 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.140 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.140 "name": "raid_bdev1", 00:17:23.140 "uuid": "11805272-48bd-11ef-a06c-59ddad71024c", 00:17:23.140 "strip_size_kb": 64, 00:17:23.141 "state": "online", 00:17:23.141 "raid_level": "concat", 00:17:23.141 "superblock": true, 00:17:23.141 "num_base_bdevs": 4, 00:17:23.141 "num_base_bdevs_discovered": 4, 00:17:23.141 "num_base_bdevs_operational": 4, 00:17:23.141 "base_bdevs_list": [ 00:17:23.141 { 00:17:23.141 "name": "pt1", 00:17:23.141 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:23.141 "is_configured": true, 00:17:23.141 "data_offset": 2048, 00:17:23.141 "data_size": 63488 00:17:23.141 }, 00:17:23.141 { 00:17:23.141 "name": "pt2", 00:17:23.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:23.141 "is_configured": true, 00:17:23.141 "data_offset": 2048, 00:17:23.141 "data_size": 63488 00:17:23.141 }, 00:17:23.141 { 00:17:23.141 "name": "pt3", 00:17:23.141 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:23.141 "is_configured": true, 00:17:23.141 "data_offset": 2048, 00:17:23.141 "data_size": 63488 00:17:23.141 }, 00:17:23.141 { 00:17:23.141 "name": "pt4", 00:17:23.141 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:23.141 "is_configured": true, 00:17:23.141 "data_offset": 2048, 00:17:23.141 "data_size": 63488 00:17:23.141 } 00:17:23.141 ] 00:17:23.141 }' 00:17:23.141 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.141 06:30:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.399 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:23.399 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:23.399 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:23.399 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:23.400 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:23.400 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:23.400 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:23.400 06:30:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:23.683 [2024-07-23 06:30:35.989648] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:23.683 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:23.683 "name": "raid_bdev1", 00:17:23.683 "aliases": [ 00:17:23.683 "11805272-48bd-11ef-a06c-59ddad71024c" 00:17:23.683 ], 00:17:23.683 "product_name": "Raid Volume", 00:17:23.683 "block_size": 512, 00:17:23.683 "num_blocks": 253952, 00:17:23.683 "uuid": "11805272-48bd-11ef-a06c-59ddad71024c", 00:17:23.683 "assigned_rate_limits": { 00:17:23.683 "rw_ios_per_sec": 0, 00:17:23.683 "rw_mbytes_per_sec": 0, 00:17:23.683 "r_mbytes_per_sec": 0, 00:17:23.683 "w_mbytes_per_sec": 0 00:17:23.683 }, 00:17:23.683 "claimed": false, 00:17:23.683 "zoned": false, 00:17:23.683 "supported_io_types": { 00:17:23.683 "read": true, 00:17:23.683 "write": true, 00:17:23.683 "unmap": true, 00:17:23.683 "flush": true, 00:17:23.683 "reset": true, 00:17:23.683 "nvme_admin": false, 00:17:23.683 "nvme_io": false, 00:17:23.683 "nvme_io_md": false, 00:17:23.683 "write_zeroes": true, 00:17:23.683 "zcopy": false, 00:17:23.683 "get_zone_info": false, 00:17:23.683 "zone_management": false, 00:17:23.683 "zone_append": false, 00:17:23.683 "compare": false, 00:17:23.683 "compare_and_write": false, 00:17:23.683 "abort": false, 00:17:23.683 "seek_hole": false, 00:17:23.684 "seek_data": false, 00:17:23.684 "copy": false, 00:17:23.684 "nvme_iov_md": false 00:17:23.684 }, 00:17:23.684 "memory_domains": [ 00:17:23.684 { 00:17:23.684 "dma_device_id": "system", 00:17:23.684 
"dma_device_type": 1 00:17:23.684 }, 00:17:23.684 { 00:17:23.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.684 "dma_device_type": 2 00:17:23.684 }, 00:17:23.684 { 00:17:23.684 "dma_device_id": "system", 00:17:23.684 "dma_device_type": 1 00:17:23.684 }, 00:17:23.684 { 00:17:23.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.684 "dma_device_type": 2 00:17:23.684 }, 00:17:23.684 { 00:17:23.684 "dma_device_id": "system", 00:17:23.684 "dma_device_type": 1 00:17:23.684 }, 00:17:23.684 { 00:17:23.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.684 "dma_device_type": 2 00:17:23.684 }, 00:17:23.684 { 00:17:23.684 "dma_device_id": "system", 00:17:23.684 "dma_device_type": 1 00:17:23.684 }, 00:17:23.684 { 00:17:23.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.684 "dma_device_type": 2 00:17:23.684 } 00:17:23.684 ], 00:17:23.684 "driver_specific": { 00:17:23.684 "raid": { 00:17:23.684 "uuid": "11805272-48bd-11ef-a06c-59ddad71024c", 00:17:23.684 "strip_size_kb": 64, 00:17:23.684 "state": "online", 00:17:23.684 "raid_level": "concat", 00:17:23.684 "superblock": true, 00:17:23.684 "num_base_bdevs": 4, 00:17:23.684 "num_base_bdevs_discovered": 4, 00:17:23.684 "num_base_bdevs_operational": 4, 00:17:23.684 "base_bdevs_list": [ 00:17:23.684 { 00:17:23.684 "name": "pt1", 00:17:23.684 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:23.684 "is_configured": true, 00:17:23.684 "data_offset": 2048, 00:17:23.684 "data_size": 63488 00:17:23.684 }, 00:17:23.684 { 00:17:23.684 "name": "pt2", 00:17:23.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:23.684 "is_configured": true, 00:17:23.684 "data_offset": 2048, 00:17:23.684 "data_size": 63488 00:17:23.684 }, 00:17:23.684 { 00:17:23.684 "name": "pt3", 00:17:23.684 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:23.684 "is_configured": true, 00:17:23.684 "data_offset": 2048, 00:17:23.684 "data_size": 63488 00:17:23.684 }, 00:17:23.684 { 00:17:23.684 "name": "pt4", 00:17:23.684 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:23.684 "is_configured": true, 00:17:23.684 "data_offset": 2048, 00:17:23.684 "data_size": 63488 00:17:23.684 } 00:17:23.684 ] 00:17:23.684 } 00:17:23.684 } 00:17:23.684 }' 00:17:23.684 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:23.684 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:23.684 pt2 00:17:23.684 pt3 00:17:23.684 pt4' 00:17:23.684 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:23.684 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:23.684 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:23.942 "name": "pt1", 00:17:23.942 "aliases": [ 00:17:23.942 "00000000-0000-0000-0000-000000000001" 00:17:23.942 ], 00:17:23.942 "product_name": "passthru", 00:17:23.942 "block_size": 512, 00:17:23.942 "num_blocks": 65536, 00:17:23.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:23.942 "assigned_rate_limits": { 00:17:23.942 "rw_ios_per_sec": 0, 00:17:23.942 "rw_mbytes_per_sec": 0, 00:17:23.942 "r_mbytes_per_sec": 0, 00:17:23.942 "w_mbytes_per_sec": 0 00:17:23.942 }, 00:17:23.942 "claimed": true, 00:17:23.942 
"claim_type": "exclusive_write", 00:17:23.942 "zoned": false, 00:17:23.942 "supported_io_types": { 00:17:23.942 "read": true, 00:17:23.942 "write": true, 00:17:23.942 "unmap": true, 00:17:23.942 "flush": true, 00:17:23.942 "reset": true, 00:17:23.942 "nvme_admin": false, 00:17:23.942 "nvme_io": false, 00:17:23.942 "nvme_io_md": false, 00:17:23.942 "write_zeroes": true, 00:17:23.942 "zcopy": true, 00:17:23.942 "get_zone_info": false, 00:17:23.942 "zone_management": false, 00:17:23.942 "zone_append": false, 00:17:23.942 "compare": false, 00:17:23.942 "compare_and_write": false, 00:17:23.942 "abort": true, 00:17:23.942 "seek_hole": false, 00:17:23.942 "seek_data": false, 00:17:23.942 "copy": true, 00:17:23.942 "nvme_iov_md": false 00:17:23.942 }, 00:17:23.942 "memory_domains": [ 00:17:23.942 { 00:17:23.942 "dma_device_id": "system", 00:17:23.942 "dma_device_type": 1 00:17:23.942 }, 00:17:23.942 { 00:17:23.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.942 "dma_device_type": 2 00:17:23.942 } 00:17:23.942 ], 00:17:23.942 "driver_specific": { 00:17:23.942 "passthru": { 00:17:23.942 "name": "pt1", 00:17:23.942 "base_bdev_name": "malloc1" 00:17:23.942 } 00:17:23.942 } 00:17:23.942 }' 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:23.942 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:24.200 "name": "pt2", 00:17:24.200 "aliases": [ 00:17:24.200 "00000000-0000-0000-0000-000000000002" 00:17:24.200 ], 00:17:24.200 "product_name": "passthru", 00:17:24.200 "block_size": 512, 00:17:24.200 "num_blocks": 65536, 00:17:24.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.200 "assigned_rate_limits": { 00:17:24.200 "rw_ios_per_sec": 0, 00:17:24.200 "rw_mbytes_per_sec": 0, 00:17:24.200 "r_mbytes_per_sec": 0, 00:17:24.200 "w_mbytes_per_sec": 0 00:17:24.200 }, 00:17:24.200 "claimed": true, 00:17:24.200 "claim_type": "exclusive_write", 00:17:24.200 "zoned": false, 00:17:24.200 "supported_io_types": { 00:17:24.200 "read": true, 00:17:24.200 "write": true, 
00:17:24.200 "unmap": true, 00:17:24.200 "flush": true, 00:17:24.200 "reset": true, 00:17:24.200 "nvme_admin": false, 00:17:24.200 "nvme_io": false, 00:17:24.200 "nvme_io_md": false, 00:17:24.200 "write_zeroes": true, 00:17:24.200 "zcopy": true, 00:17:24.200 "get_zone_info": false, 00:17:24.200 "zone_management": false, 00:17:24.200 "zone_append": false, 00:17:24.200 "compare": false, 00:17:24.200 "compare_and_write": false, 00:17:24.200 "abort": true, 00:17:24.200 "seek_hole": false, 00:17:24.200 "seek_data": false, 00:17:24.200 "copy": true, 00:17:24.200 "nvme_iov_md": false 00:17:24.200 }, 00:17:24.200 "memory_domains": [ 00:17:24.200 { 00:17:24.200 "dma_device_id": "system", 00:17:24.200 "dma_device_type": 1 00:17:24.200 }, 00:17:24.200 { 00:17:24.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.200 "dma_device_type": 2 00:17:24.200 } 00:17:24.200 ], 00:17:24.200 "driver_specific": { 00:17:24.200 "passthru": { 00:17:24.200 "name": "pt2", 00:17:24.200 "base_bdev_name": "malloc2" 00:17:24.200 } 00:17:24.200 } 00:17:24.200 }' 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:24.200 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:24.458 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:24.458 "name": "pt3", 00:17:24.458 "aliases": [ 00:17:24.459 "00000000-0000-0000-0000-000000000003" 00:17:24.459 ], 00:17:24.459 "product_name": "passthru", 00:17:24.459 "block_size": 512, 00:17:24.459 "num_blocks": 65536, 00:17:24.459 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:24.459 "assigned_rate_limits": { 00:17:24.459 "rw_ios_per_sec": 0, 00:17:24.459 "rw_mbytes_per_sec": 0, 00:17:24.459 "r_mbytes_per_sec": 0, 00:17:24.459 "w_mbytes_per_sec": 0 00:17:24.459 }, 00:17:24.459 "claimed": true, 00:17:24.459 "claim_type": "exclusive_write", 00:17:24.459 "zoned": false, 00:17:24.459 "supported_io_types": { 00:17:24.459 "read": true, 00:17:24.459 "write": true, 00:17:24.459 "unmap": true, 00:17:24.459 "flush": true, 00:17:24.459 "reset": true, 00:17:24.459 "nvme_admin": false, 00:17:24.459 "nvme_io": false, 
00:17:24.459 "nvme_io_md": false, 00:17:24.459 "write_zeroes": true, 00:17:24.459 "zcopy": true, 00:17:24.459 "get_zone_info": false, 00:17:24.459 "zone_management": false, 00:17:24.459 "zone_append": false, 00:17:24.459 "compare": false, 00:17:24.459 "compare_and_write": false, 00:17:24.459 "abort": true, 00:17:24.459 "seek_hole": false, 00:17:24.459 "seek_data": false, 00:17:24.459 "copy": true, 00:17:24.459 "nvme_iov_md": false 00:17:24.459 }, 00:17:24.459 "memory_domains": [ 00:17:24.459 { 00:17:24.459 "dma_device_id": "system", 00:17:24.459 "dma_device_type": 1 00:17:24.459 }, 00:17:24.459 { 00:17:24.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.459 "dma_device_type": 2 00:17:24.459 } 00:17:24.459 ], 00:17:24.459 "driver_specific": { 00:17:24.459 "passthru": { 00:17:24.459 "name": "pt3", 00:17:24.459 "base_bdev_name": "malloc3" 00:17:24.459 } 00:17:24.459 } 00:17:24.459 }' 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:17:24.459 06:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:24.717 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:24.717 "name": "pt4", 00:17:24.717 "aliases": [ 00:17:24.717 "00000000-0000-0000-0000-000000000004" 00:17:24.717 ], 00:17:24.717 "product_name": "passthru", 00:17:24.717 "block_size": 512, 00:17:24.717 "num_blocks": 65536, 00:17:24.717 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:24.717 "assigned_rate_limits": { 00:17:24.717 "rw_ios_per_sec": 0, 00:17:24.717 "rw_mbytes_per_sec": 0, 00:17:24.717 "r_mbytes_per_sec": 0, 00:17:24.717 "w_mbytes_per_sec": 0 00:17:24.717 }, 00:17:24.717 "claimed": true, 00:17:24.717 "claim_type": "exclusive_write", 00:17:24.717 "zoned": false, 00:17:24.717 "supported_io_types": { 00:17:24.717 "read": true, 00:17:24.717 "write": true, 00:17:24.717 "unmap": true, 00:17:24.717 "flush": true, 00:17:24.717 "reset": true, 00:17:24.717 "nvme_admin": false, 00:17:24.717 "nvme_io": false, 00:17:24.717 "nvme_io_md": false, 00:17:24.718 "write_zeroes": true, 00:17:24.718 "zcopy": true, 00:17:24.718 "get_zone_info": false, 00:17:24.718 
"zone_management": false, 00:17:24.718 "zone_append": false, 00:17:24.718 "compare": false, 00:17:24.718 "compare_and_write": false, 00:17:24.718 "abort": true, 00:17:24.718 "seek_hole": false, 00:17:24.718 "seek_data": false, 00:17:24.718 "copy": true, 00:17:24.718 "nvme_iov_md": false 00:17:24.718 }, 00:17:24.718 "memory_domains": [ 00:17:24.718 { 00:17:24.718 "dma_device_id": "system", 00:17:24.718 "dma_device_type": 1 00:17:24.718 }, 00:17:24.718 { 00:17:24.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.718 "dma_device_type": 2 00:17:24.718 } 00:17:24.718 ], 00:17:24.718 "driver_specific": { 00:17:24.718 "passthru": { 00:17:24.718 "name": "pt4", 00:17:24.718 "base_bdev_name": "malloc4" 00:17:24.718 } 00:17:24.718 } 00:17:24.718 }' 00:17:24.718 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:24.718 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:24.718 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:24.718 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:24.718 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:24.718 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:24.718 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:24.718 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:24.718 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:24.718 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:24.718 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:24.718 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:24.718 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:24.718 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:24.976 [2024-07-23 06:30:37.477698] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.976 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=11805272-48bd-11ef-a06c-59ddad71024c 00:17:24.976 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 11805272-48bd-11ef-a06c-59ddad71024c ']' 00:17:24.976 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:25.542 [2024-07-23 06:30:37.781646] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.542 [2024-07-23 06:30:37.781672] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.542 [2024-07-23 06:30:37.781696] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.542 [2024-07-23 06:30:37.781713] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.542 [2024-07-23 06:30:37.781717] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28e838a35900 name raid_bdev1, state offline 00:17:25.542 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.542 06:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:25.542 06:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:25.542 06:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:25.542 06:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:25.542 06:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:26.112 06:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:26.112 06:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:26.370 06:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:26.370 06:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:26.628 06:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:26.628 06:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:26.886 06:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:26.886 06:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:27.145 06:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:27.145 06:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:27.145 06:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:27.145 06:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:27.145 06:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.145 06:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.145 06:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.145 06:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.145 06:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.145 06:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.145 06:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.145 06:30:39 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:27.145 06:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:27.403 [2024-07-23 06:30:39.837703] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:27.403 [2024-07-23 06:30:39.838293] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:27.403 [2024-07-23 06:30:39.838313] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:27.403 [2024-07-23 06:30:39.838321] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:27.403 [2024-07-23 06:30:39.838336] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:27.403 [2024-07-23 06:30:39.838373] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:27.403 [2024-07-23 06:30:39.838396] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:27.403 [2024-07-23 06:30:39.838406] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:27.403 [2024-07-23 06:30:39.838414] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.403 [2024-07-23 06:30:39.838419] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28e838a35680 name raid_bdev1, state configuring 00:17:27.403 request: 00:17:27.403 { 00:17:27.403 "name": "raid_bdev1", 00:17:27.403 "raid_level": "concat", 00:17:27.403 "base_bdevs": [ 00:17:27.403 "malloc1", 00:17:27.403 "malloc2", 00:17:27.403 "malloc3", 00:17:27.403 "malloc4" 00:17:27.403 ], 00:17:27.403 "strip_size_kb": 64, 00:17:27.403 "superblock": false, 00:17:27.403 "method": "bdev_raid_create", 00:17:27.403 "req_id": 1 00:17:27.403 } 00:17:27.403 Got JSON-RPC error response 00:17:27.403 response: 00:17:27.403 { 00:17:27.403 "code": -17, 00:17:27.403 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:27.403 } 00:17:27.403 06:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:27.403 06:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:27.403 06:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:27.403 06:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:27.403 06:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.403 06:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:27.661 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:27.661 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:27.661 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:27.920 [2024-07-23 06:30:40.405709] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:27.920 [2024-07-23 06:30:40.405784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.920 [2024-07-23 06:30:40.405803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e838a35180 00:17:27.920 [2024-07-23 06:30:40.405813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.920 [2024-07-23 06:30:40.406465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.920 [2024-07-23 06:30:40.406493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:27.920 [2024-07-23 06:30:40.406520] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:27.920 [2024-07-23 06:30:40.406532] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:27.920 pt1 00:17:27.920 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:27.920 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:27.920 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:27.920 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:27.920 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:27.920 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:27.920 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:27.920 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:27.920 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:27.920 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:27.920 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.920 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.178 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.178 "name": "raid_bdev1", 00:17:28.178 "uuid": "11805272-48bd-11ef-a06c-59ddad71024c", 00:17:28.178 "strip_size_kb": 64, 00:17:28.178 "state": "configuring", 00:17:28.178 "raid_level": "concat", 00:17:28.178 "superblock": true, 00:17:28.178 "num_base_bdevs": 4, 00:17:28.178 "num_base_bdevs_discovered": 1, 00:17:28.178 "num_base_bdevs_operational": 4, 00:17:28.178 "base_bdevs_list": [ 00:17:28.178 { 00:17:28.178 "name": "pt1", 00:17:28.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:28.178 "is_configured": true, 00:17:28.178 "data_offset": 2048, 00:17:28.178 "data_size": 63488 00:17:28.178 }, 00:17:28.178 { 00:17:28.178 "name": null, 00:17:28.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:28.179 "is_configured": false, 00:17:28.179 "data_offset": 2048, 00:17:28.179 "data_size": 63488 00:17:28.179 }, 00:17:28.179 { 00:17:28.179 "name": null, 00:17:28.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:28.179 "is_configured": false, 00:17:28.179 "data_offset": 2048, 00:17:28.179 "data_size": 63488 00:17:28.179 }, 00:17:28.179 { 00:17:28.179 "name": null, 
00:17:28.179 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:28.179 "is_configured": false, 00:17:28.179 "data_offset": 2048, 00:17:28.179 "data_size": 63488 00:17:28.179 } 00:17:28.179 ] 00:17:28.179 }' 00:17:28.179 06:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.179 06:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.743 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:17:28.743 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:29.001 [2024-07-23 06:30:41.321735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:29.001 [2024-07-23 06:30:41.321798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.001 [2024-07-23 06:30:41.321811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e838a34780 00:17:29.001 [2024-07-23 06:30:41.321819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.001 [2024-07-23 06:30:41.321946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.001 [2024-07-23 06:30:41.321958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:29.001 [2024-07-23 06:30:41.321981] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:29.001 [2024-07-23 06:30:41.321990] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:29.001 pt2 00:17:29.001 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:29.260 [2024-07-23 06:30:41.629751] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:29.260 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:29.260 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:29.260 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:29.260 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:29.260 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:29.260 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:29.260 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:29.260 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.260 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.260 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.260 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.260 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.517 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.517 "name": 
"raid_bdev1", 00:17:29.517 "uuid": "11805272-48bd-11ef-a06c-59ddad71024c", 00:17:29.517 "strip_size_kb": 64, 00:17:29.517 "state": "configuring", 00:17:29.517 "raid_level": "concat", 00:17:29.517 "superblock": true, 00:17:29.517 "num_base_bdevs": 4, 00:17:29.518 "num_base_bdevs_discovered": 1, 00:17:29.518 "num_base_bdevs_operational": 4, 00:17:29.518 "base_bdevs_list": [ 00:17:29.518 { 00:17:29.518 "name": "pt1", 00:17:29.518 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.518 "is_configured": true, 00:17:29.518 "data_offset": 2048, 00:17:29.518 "data_size": 63488 00:17:29.518 }, 00:17:29.518 { 00:17:29.518 "name": null, 00:17:29.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.518 "is_configured": false, 00:17:29.518 "data_offset": 2048, 00:17:29.518 "data_size": 63488 00:17:29.518 }, 00:17:29.518 { 00:17:29.518 "name": null, 00:17:29.518 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:29.518 "is_configured": false, 00:17:29.518 "data_offset": 2048, 00:17:29.518 "data_size": 63488 00:17:29.518 }, 00:17:29.518 { 00:17:29.518 "name": null, 00:17:29.518 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:29.518 "is_configured": false, 00:17:29.518 "data_offset": 2048, 00:17:29.518 "data_size": 63488 00:17:29.518 } 00:17:29.518 ] 00:17:29.518 }' 00:17:29.518 06:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.518 06:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.775 06:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:29.775 06:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:29.775 06:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:30.341 [2024-07-23 06:30:42.573764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:30.341 [2024-07-23 06:30:42.573826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.341 [2024-07-23 06:30:42.573839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e838a34780 00:17:30.341 [2024-07-23 06:30:42.573847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.341 [2024-07-23 06:30:42.573963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.341 [2024-07-23 06:30:42.573974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:30.341 [2024-07-23 06:30:42.573999] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:30.341 [2024-07-23 06:30:42.574007] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:30.341 pt2 00:17:30.341 06:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:30.341 06:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:30.341 06:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:30.599 [2024-07-23 06:30:42.873779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:30.599 [2024-07-23 06:30:42.873857] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:30.599 [2024-07-23 06:30:42.873876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e838a35b80 00:17:30.599 [2024-07-23 06:30:42.873893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.599 [2024-07-23 06:30:42.874041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.599 [2024-07-23 06:30:42.874073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:30.599 [2024-07-23 06:30:42.874105] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:30.599 [2024-07-23 06:30:42.874121] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:30.599 pt3 00:17:30.599 06:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:30.599 06:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:30.599 06:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:30.857 [2024-07-23 06:30:43.169776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:30.857 [2024-07-23 06:30:43.169840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.857 [2024-07-23 06:30:43.169853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e838a35900 00:17:30.857 [2024-07-23 06:30:43.169862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.857 [2024-07-23 06:30:43.169975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.857 [2024-07-23 06:30:43.169986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:30.857 [2024-07-23 06:30:43.170009] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:30.857 [2024-07-23 06:30:43.170018] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:30.857 [2024-07-23 06:30:43.170050] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x28e838a34c80 00:17:30.857 [2024-07-23 06:30:43.170055] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:30.857 [2024-07-23 06:30:43.170076] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28e838a97e20 00:17:30.857 [2024-07-23 06:30:43.170131] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x28e838a34c80 00:17:30.857 [2024-07-23 06:30:43.170136] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x28e838a34c80 00:17:30.857 [2024-07-23 06:30:43.170158] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.857 pt4 00:17:30.857 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:30.857 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:30.857 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:30.857 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:30.857 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:30.857 
06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:30.857 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:30.857 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:30.857 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.857 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.857 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.857 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.857 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.857 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.116 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:31.116 "name": "raid_bdev1", 00:17:31.116 "uuid": "11805272-48bd-11ef-a06c-59ddad71024c", 00:17:31.116 "strip_size_kb": 64, 00:17:31.116 "state": "online", 00:17:31.116 "raid_level": "concat", 00:17:31.116 "superblock": true, 00:17:31.116 "num_base_bdevs": 4, 00:17:31.116 "num_base_bdevs_discovered": 4, 00:17:31.116 "num_base_bdevs_operational": 4, 00:17:31.116 "base_bdevs_list": [ 00:17:31.116 { 00:17:31.116 "name": "pt1", 00:17:31.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:31.116 "is_configured": true, 00:17:31.116 "data_offset": 2048, 00:17:31.116 "data_size": 63488 00:17:31.116 }, 00:17:31.116 { 00:17:31.116 "name": "pt2", 00:17:31.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.116 "is_configured": true, 00:17:31.116 "data_offset": 2048, 00:17:31.116 "data_size": 63488 00:17:31.116 }, 00:17:31.116 { 00:17:31.116 "name": "pt3", 00:17:31.116 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:31.116 "is_configured": true, 00:17:31.116 "data_offset": 2048, 00:17:31.116 "data_size": 63488 00:17:31.116 }, 00:17:31.116 { 00:17:31.116 "name": "pt4", 00:17:31.116 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:31.116 "is_configured": true, 00:17:31.116 "data_offset": 2048, 00:17:31.116 "data_size": 63488 00:17:31.116 } 00:17:31.116 ] 00:17:31.116 }' 00:17:31.116 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:31.116 06:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.374 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:31.374 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:31.374 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:31.374 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:31.374 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:31.374 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:31.374 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:31.374 06:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq 
'.[]' 00:17:31.632 [2024-07-23 06:30:44.113858] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.632 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:31.632 "name": "raid_bdev1", 00:17:31.632 "aliases": [ 00:17:31.632 "11805272-48bd-11ef-a06c-59ddad71024c" 00:17:31.632 ], 00:17:31.632 "product_name": "Raid Volume", 00:17:31.632 "block_size": 512, 00:17:31.632 "num_blocks": 253952, 00:17:31.632 "uuid": "11805272-48bd-11ef-a06c-59ddad71024c", 00:17:31.632 "assigned_rate_limits": { 00:17:31.632 "rw_ios_per_sec": 0, 00:17:31.632 "rw_mbytes_per_sec": 0, 00:17:31.632 "r_mbytes_per_sec": 0, 00:17:31.632 "w_mbytes_per_sec": 0 00:17:31.632 }, 00:17:31.632 "claimed": false, 00:17:31.632 "zoned": false, 00:17:31.632 "supported_io_types": { 00:17:31.632 "read": true, 00:17:31.632 "write": true, 00:17:31.632 "unmap": true, 00:17:31.632 "flush": true, 00:17:31.632 "reset": true, 00:17:31.632 "nvme_admin": false, 00:17:31.632 "nvme_io": false, 00:17:31.632 "nvme_io_md": false, 00:17:31.632 "write_zeroes": true, 00:17:31.632 "zcopy": false, 00:17:31.632 "get_zone_info": false, 00:17:31.632 "zone_management": false, 00:17:31.632 "zone_append": false, 00:17:31.632 "compare": false, 00:17:31.632 "compare_and_write": false, 00:17:31.632 "abort": false, 00:17:31.632 "seek_hole": false, 00:17:31.632 "seek_data": false, 00:17:31.632 "copy": false, 00:17:31.632 "nvme_iov_md": false 00:17:31.632 }, 00:17:31.632 "memory_domains": [ 00:17:31.632 { 00:17:31.632 "dma_device_id": "system", 00:17:31.632 "dma_device_type": 1 00:17:31.632 }, 00:17:31.632 { 00:17:31.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.632 "dma_device_type": 2 00:17:31.632 }, 00:17:31.632 { 00:17:31.632 "dma_device_id": "system", 00:17:31.632 "dma_device_type": 1 00:17:31.632 }, 00:17:31.632 { 00:17:31.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.632 "dma_device_type": 2 00:17:31.632 }, 00:17:31.632 { 00:17:31.632 "dma_device_id": "system", 00:17:31.632 "dma_device_type": 1 00:17:31.632 }, 00:17:31.632 { 00:17:31.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.632 "dma_device_type": 2 00:17:31.632 }, 00:17:31.632 { 00:17:31.632 "dma_device_id": "system", 00:17:31.632 "dma_device_type": 1 00:17:31.632 }, 00:17:31.632 { 00:17:31.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.632 "dma_device_type": 2 00:17:31.632 } 00:17:31.632 ], 00:17:31.632 "driver_specific": { 00:17:31.632 "raid": { 00:17:31.632 "uuid": "11805272-48bd-11ef-a06c-59ddad71024c", 00:17:31.632 "strip_size_kb": 64, 00:17:31.632 "state": "online", 00:17:31.632 "raid_level": "concat", 00:17:31.632 "superblock": true, 00:17:31.632 "num_base_bdevs": 4, 00:17:31.632 "num_base_bdevs_discovered": 4, 00:17:31.632 "num_base_bdevs_operational": 4, 00:17:31.632 "base_bdevs_list": [ 00:17:31.632 { 00:17:31.632 "name": "pt1", 00:17:31.632 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:31.632 "is_configured": true, 00:17:31.632 "data_offset": 2048, 00:17:31.632 "data_size": 63488 00:17:31.632 }, 00:17:31.632 { 00:17:31.632 "name": "pt2", 00:17:31.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.632 "is_configured": true, 00:17:31.632 "data_offset": 2048, 00:17:31.632 "data_size": 63488 00:17:31.632 }, 00:17:31.632 { 00:17:31.632 "name": "pt3", 00:17:31.632 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:31.632 "is_configured": true, 00:17:31.632 "data_offset": 2048, 00:17:31.632 "data_size": 63488 00:17:31.632 }, 00:17:31.632 { 00:17:31.632 "name": "pt4", 
00:17:31.632 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:31.632 "is_configured": true, 00:17:31.632 "data_offset": 2048, 00:17:31.632 "data_size": 63488 00:17:31.632 } 00:17:31.632 ] 00:17:31.632 } 00:17:31.632 } 00:17:31.632 }' 00:17:31.632 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:31.632 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:31.632 pt2 00:17:31.632 pt3 00:17:31.632 pt4' 00:17:31.632 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:31.632 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:31.632 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:32.198 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:32.198 "name": "pt1", 00:17:32.198 "aliases": [ 00:17:32.198 "00000000-0000-0000-0000-000000000001" 00:17:32.198 ], 00:17:32.198 "product_name": "passthru", 00:17:32.198 "block_size": 512, 00:17:32.198 "num_blocks": 65536, 00:17:32.198 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:32.199 "assigned_rate_limits": { 00:17:32.199 "rw_ios_per_sec": 0, 00:17:32.199 "rw_mbytes_per_sec": 0, 00:17:32.199 "r_mbytes_per_sec": 0, 00:17:32.199 "w_mbytes_per_sec": 0 00:17:32.199 }, 00:17:32.199 "claimed": true, 00:17:32.199 "claim_type": "exclusive_write", 00:17:32.199 "zoned": false, 00:17:32.199 "supported_io_types": { 00:17:32.199 "read": true, 00:17:32.199 "write": true, 00:17:32.199 "unmap": true, 00:17:32.199 "flush": true, 00:17:32.199 "reset": true, 00:17:32.199 "nvme_admin": false, 00:17:32.199 "nvme_io": false, 00:17:32.199 "nvme_io_md": false, 00:17:32.199 "write_zeroes": true, 00:17:32.199 "zcopy": true, 00:17:32.199 "get_zone_info": false, 00:17:32.199 "zone_management": false, 00:17:32.199 "zone_append": false, 00:17:32.199 "compare": false, 00:17:32.199 "compare_and_write": false, 00:17:32.199 "abort": true, 00:17:32.199 "seek_hole": false, 00:17:32.199 "seek_data": false, 00:17:32.199 "copy": true, 00:17:32.199 "nvme_iov_md": false 00:17:32.199 }, 00:17:32.199 "memory_domains": [ 00:17:32.199 { 00:17:32.199 "dma_device_id": "system", 00:17:32.199 "dma_device_type": 1 00:17:32.199 }, 00:17:32.199 { 00:17:32.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.199 "dma_device_type": 2 00:17:32.199 } 00:17:32.199 ], 00:17:32.199 "driver_specific": { 00:17:32.199 "passthru": { 00:17:32.199 "name": "pt1", 00:17:32.199 "base_bdev_name": "malloc1" 00:17:32.199 } 00:17:32.199 } 00:17:32.199 }' 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:32.199 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:32.493 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:32.493 "name": "pt2", 00:17:32.494 "aliases": [ 00:17:32.494 "00000000-0000-0000-0000-000000000002" 00:17:32.494 ], 00:17:32.494 "product_name": "passthru", 00:17:32.494 "block_size": 512, 00:17:32.494 "num_blocks": 65536, 00:17:32.494 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.494 "assigned_rate_limits": { 00:17:32.494 "rw_ios_per_sec": 0, 00:17:32.494 "rw_mbytes_per_sec": 0, 00:17:32.494 "r_mbytes_per_sec": 0, 00:17:32.494 "w_mbytes_per_sec": 0 00:17:32.494 }, 00:17:32.494 "claimed": true, 00:17:32.494 "claim_type": "exclusive_write", 00:17:32.494 "zoned": false, 00:17:32.494 "supported_io_types": { 00:17:32.494 "read": true, 00:17:32.494 "write": true, 00:17:32.494 "unmap": true, 00:17:32.494 "flush": true, 00:17:32.494 "reset": true, 00:17:32.494 "nvme_admin": false, 00:17:32.494 "nvme_io": false, 00:17:32.494 "nvme_io_md": false, 00:17:32.494 "write_zeroes": true, 00:17:32.494 "zcopy": true, 00:17:32.494 "get_zone_info": false, 00:17:32.494 "zone_management": false, 00:17:32.494 "zone_append": false, 00:17:32.494 "compare": false, 00:17:32.494 "compare_and_write": false, 00:17:32.494 "abort": true, 00:17:32.494 "seek_hole": false, 00:17:32.494 "seek_data": false, 00:17:32.494 "copy": true, 00:17:32.494 "nvme_iov_md": false 00:17:32.494 }, 00:17:32.494 "memory_domains": [ 00:17:32.494 { 00:17:32.494 "dma_device_id": "system", 00:17:32.494 "dma_device_type": 1 00:17:32.494 }, 00:17:32.494 { 00:17:32.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.494 "dma_device_type": 2 00:17:32.494 } 00:17:32.494 ], 00:17:32.494 "driver_specific": { 00:17:32.494 "passthru": { 00:17:32.494 "name": "pt2", 00:17:32.494 "base_bdev_name": "malloc2" 00:17:32.494 } 00:17:32.494 } 00:17:32.494 }' 00:17:32.494 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.494 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.494 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:32.494 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.494 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.494 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:32.494 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.494 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.494 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:32.494 06:30:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.494 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.494 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:32.494 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:32.494 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:32.494 06:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:32.763 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:32.763 "name": "pt3", 00:17:32.763 "aliases": [ 00:17:32.763 "00000000-0000-0000-0000-000000000003" 00:17:32.763 ], 00:17:32.763 "product_name": "passthru", 00:17:32.763 "block_size": 512, 00:17:32.763 "num_blocks": 65536, 00:17:32.763 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:32.763 "assigned_rate_limits": { 00:17:32.763 "rw_ios_per_sec": 0, 00:17:32.763 "rw_mbytes_per_sec": 0, 00:17:32.763 "r_mbytes_per_sec": 0, 00:17:32.763 "w_mbytes_per_sec": 0 00:17:32.763 }, 00:17:32.763 "claimed": true, 00:17:32.763 "claim_type": "exclusive_write", 00:17:32.763 "zoned": false, 00:17:32.763 "supported_io_types": { 00:17:32.763 "read": true, 00:17:32.763 "write": true, 00:17:32.763 "unmap": true, 00:17:32.763 "flush": true, 00:17:32.763 "reset": true, 00:17:32.763 "nvme_admin": false, 00:17:32.763 "nvme_io": false, 00:17:32.763 "nvme_io_md": false, 00:17:32.763 "write_zeroes": true, 00:17:32.763 "zcopy": true, 00:17:32.763 "get_zone_info": false, 00:17:32.763 "zone_management": false, 00:17:32.763 "zone_append": false, 00:17:32.763 "compare": false, 00:17:32.763 "compare_and_write": false, 00:17:32.763 "abort": true, 00:17:32.763 "seek_hole": false, 00:17:32.763 "seek_data": false, 00:17:32.763 "copy": true, 00:17:32.763 "nvme_iov_md": false 00:17:32.763 }, 00:17:32.763 "memory_domains": [ 00:17:32.763 { 00:17:32.764 "dma_device_id": "system", 00:17:32.764 "dma_device_type": 1 00:17:32.764 }, 00:17:32.764 { 00:17:32.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.764 "dma_device_type": 2 00:17:32.764 } 00:17:32.764 ], 00:17:32.764 "driver_specific": { 00:17:32.764 "passthru": { 00:17:32.764 "name": "pt3", 00:17:32.764 "base_bdev_name": "malloc3" 00:17:32.764 } 00:17:32.764 } 00:17:32.764 }' 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# jq .dif_type 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:17:32.764 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:33.022 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:33.022 "name": "pt4", 00:17:33.022 "aliases": [ 00:17:33.022 "00000000-0000-0000-0000-000000000004" 00:17:33.022 ], 00:17:33.022 "product_name": "passthru", 00:17:33.022 "block_size": 512, 00:17:33.022 "num_blocks": 65536, 00:17:33.022 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:33.022 "assigned_rate_limits": { 00:17:33.022 "rw_ios_per_sec": 0, 00:17:33.022 "rw_mbytes_per_sec": 0, 00:17:33.022 "r_mbytes_per_sec": 0, 00:17:33.022 "w_mbytes_per_sec": 0 00:17:33.022 }, 00:17:33.022 "claimed": true, 00:17:33.022 "claim_type": "exclusive_write", 00:17:33.022 "zoned": false, 00:17:33.022 "supported_io_types": { 00:17:33.022 "read": true, 00:17:33.022 "write": true, 00:17:33.022 "unmap": true, 00:17:33.022 "flush": true, 00:17:33.022 "reset": true, 00:17:33.022 "nvme_admin": false, 00:17:33.022 "nvme_io": false, 00:17:33.022 "nvme_io_md": false, 00:17:33.022 "write_zeroes": true, 00:17:33.022 "zcopy": true, 00:17:33.022 "get_zone_info": false, 00:17:33.022 "zone_management": false, 00:17:33.022 "zone_append": false, 00:17:33.022 "compare": false, 00:17:33.022 "compare_and_write": false, 00:17:33.022 "abort": true, 00:17:33.022 "seek_hole": false, 00:17:33.022 "seek_data": false, 00:17:33.022 "copy": true, 00:17:33.022 "nvme_iov_md": false 00:17:33.022 }, 00:17:33.022 "memory_domains": [ 00:17:33.022 { 00:17:33.022 "dma_device_id": "system", 00:17:33.022 "dma_device_type": 1 00:17:33.022 }, 00:17:33.022 { 00:17:33.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.022 "dma_device_type": 2 00:17:33.022 } 00:17:33.022 ], 00:17:33.022 "driver_specific": { 00:17:33.022 "passthru": { 00:17:33.022 "name": "pt4", 00:17:33.022 "base_bdev_name": "malloc4" 00:17:33.022 } 00:17:33.022 } 00:17:33.022 }' 00:17:33.022 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:33.022 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:33.022 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:33.022 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:33.022 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:33.281 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:33.281 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:33.281 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:33.281 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:33.281 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:33.281 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:33.281 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:33.281 06:30:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:33.281 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:33.539 [2024-07-23 06:30:45.853901] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 11805272-48bd-11ef-a06c-59ddad71024c '!=' 11805272-48bd-11ef-a06c-59ddad71024c ']' 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 62384 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 62384 ']' 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 62384 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 62384 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:33.539 killing process with pid 62384 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62384' 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 62384 00:17:33.539 [2024-07-23 06:30:45.884742] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.539 [2024-07-23 06:30:45.884783] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.539 06:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 62384 00:17:33.539 [2024-07-23 06:30:45.884800] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.539 [2024-07-23 06:30:45.884805] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28e838a34c80 name raid_bdev1, state offline 00:17:33.539 [2024-07-23 06:30:45.908314] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:33.797 06:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:33.797 00:17:33.797 real 0m14.572s 00:17:33.797 user 0m26.319s 00:17:33.797 sys 0m1.985s 00:17:33.797 ************************************ 00:17:33.797 END TEST raid_superblock_test 00:17:33.797 ************************************ 00:17:33.797 06:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:33.797 06:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.797 06:30:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:33.797 06:30:46 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 
read 00:17:33.797 06:30:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:33.797 06:30:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:33.797 06:30:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:33.797 ************************************ 00:17:33.797 START TEST raid_read_error_test 00:17:33.797 ************************************ 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.KwvtzVAxng 
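The step that follows launches bdevperf in its wait-for-RPC mode on a private socket and blocks until that socket answers before any bdevs are configured. A minimal sketch of that launch-and-wait step, assuming a simple polling loop in place of the repository's waitforlisten helper (the bdevperf command line itself is the one the trace records):

  sock=/var/tmp/spdk-raid.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # start bdevperf idle (-z) on its own RPC socket: 60 s randrw, 50% reads, 128k I/O, queue depth 1
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r $sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
  raid_pid=$!
  # poll until the socket accepts RPCs; rpc_get_methods is a cheap query any SPDK app answers
  until $rpc -s $sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done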
00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62789 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62789 /var/tmp/spdk-raid.sock 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 62789 ']' 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:33.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.797 06:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.797 [2024-07-23 06:30:46.147306] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:33.797 [2024-07-23 06:30:46.147517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:34.363 EAL: TSC is not safe to use in SMP mode 00:17:34.363 EAL: TSC is not invariant 00:17:34.363 [2024-07-23 06:30:46.700301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.363 [2024-07-23 06:30:46.798944] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
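With bdevperf listening, the test stacks malloc, error-injection, and passthru bdevs for each base device and then assembles the concat volume on top of them. Condensed into a sketch, the RPC sequence traced below for BaseBdev1 (and repeated for BaseBdev2 through BaseBdev4) amounts to the following; every name and flag is taken from the trace itself:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # 32 MB malloc bdev with 512-byte blocks as backing storage
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  # error-injection bdev wraps the malloc; it comes up as EE_BaseBdev1_malloc
  $rpc -s $sock bdev_error_create BaseBdev1_malloc
  # passthru bdev gives the stack the stable name the raid level consumes
  $rpc -s $sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  # ...same for the other three, then the concat volume: 64k strip size, superblock enabled
  $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s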
00:17:34.363 [2024-07-23 06:30:46.801473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.363 [2024-07-23 06:30:46.802446] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.363 [2024-07-23 06:30:46.802463] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.931 06:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.931 06:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:34.931 06:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:34.931 06:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:35.189 BaseBdev1_malloc 00:17:35.189 06:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:35.448 true 00:17:35.448 06:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:35.707 [2024-07-23 06:30:48.044007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:35.707 [2024-07-23 06:30:48.044092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.707 [2024-07-23 06:30:48.044120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x30f608434780 00:17:35.707 [2024-07-23 06:30:48.044129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.707 [2024-07-23 06:30:48.044773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.707 [2024-07-23 06:30:48.044800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:35.707 BaseBdev1 00:17:35.707 06:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:35.707 06:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:35.966 BaseBdev2_malloc 00:17:35.966 06:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:36.225 true 00:17:36.225 06:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:36.506 [2024-07-23 06:30:48.776067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:36.506 [2024-07-23 06:30:48.776160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.506 [2024-07-23 06:30:48.776218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x30f608434c80 00:17:36.506 [2024-07-23 06:30:48.776227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.506 [2024-07-23 06:30:48.776907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.506 [2024-07-23 06:30:48.776932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:17:36.506 BaseBdev2 00:17:36.506 06:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:36.506 06:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:36.506 BaseBdev3_malloc 00:17:36.765 06:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:36.765 true 00:17:36.765 06:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:37.023 [2024-07-23 06:30:49.480113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:37.023 [2024-07-23 06:30:49.480185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.023 [2024-07-23 06:30:49.480226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x30f608435180 00:17:37.023 [2024-07-23 06:30:49.480234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.023 [2024-07-23 06:30:49.480896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.023 [2024-07-23 06:30:49.480920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:37.023 BaseBdev3 00:17:37.023 06:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:37.023 06:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:37.282 BaseBdev4_malloc 00:17:37.282 06:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:37.540 true 00:17:37.540 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:37.798 [2024-07-23 06:30:50.260172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:37.798 [2024-07-23 06:30:50.260250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.798 [2024-07-23 06:30:50.260292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x30f608435680 00:17:37.798 [2024-07-23 06:30:50.260300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.798 [2024-07-23 06:30:50.260988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.798 [2024-07-23 06:30:50.261027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:37.798 BaseBdev4 00:17:37.798 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:38.056 [2024-07-23 06:30:50.496232] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.056 [2024-07-23 06:30:50.496870] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.056 [2024-07-23 06:30:50.496894] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:38.056 [2024-07-23 06:30:50.496909] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:38.056 [2024-07-23 06:30:50.496990] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x30f608435900 00:17:38.056 [2024-07-23 06:30:50.497022] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:38.056 [2024-07-23 06:30:50.497065] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x30f6084a0e20 00:17:38.056 [2024-07-23 06:30:50.497141] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x30f608435900 00:17:38.056 [2024-07-23 06:30:50.497146] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x30f608435900 00:17:38.056 [2024-07-23 06:30:50.497173] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.056 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:38.056 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:38.056 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:38.056 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:38.056 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:38.056 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:38.056 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.056 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.056 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.056 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.056 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.056 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.314 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.314 "name": "raid_bdev1", 00:17:38.314 "uuid": "1aa740be-48bd-11ef-a06c-59ddad71024c", 00:17:38.314 "strip_size_kb": 64, 00:17:38.314 "state": "online", 00:17:38.314 "raid_level": "concat", 00:17:38.314 "superblock": true, 00:17:38.314 "num_base_bdevs": 4, 00:17:38.314 "num_base_bdevs_discovered": 4, 00:17:38.314 "num_base_bdevs_operational": 4, 00:17:38.314 "base_bdevs_list": [ 00:17:38.314 { 00:17:38.314 "name": "BaseBdev1", 00:17:38.314 "uuid": "aca5cf4d-ca8b-fd54-b71c-fe4f92e93dc1", 00:17:38.314 "is_configured": true, 00:17:38.314 "data_offset": 2048, 00:17:38.314 "data_size": 63488 00:17:38.314 }, 00:17:38.314 { 00:17:38.314 "name": "BaseBdev2", 00:17:38.314 "uuid": "8eab1714-b2e0-db50-aec8-68cb3810bf8e", 00:17:38.314 "is_configured": true, 00:17:38.314 "data_offset": 2048, 00:17:38.314 "data_size": 63488 00:17:38.314 }, 00:17:38.314 { 00:17:38.314 "name": "BaseBdev3", 00:17:38.314 "uuid": 
"e56d7ba4-c35d-cb5e-927b-036efdf101df", 00:17:38.314 "is_configured": true, 00:17:38.314 "data_offset": 2048, 00:17:38.314 "data_size": 63488 00:17:38.314 }, 00:17:38.314 { 00:17:38.314 "name": "BaseBdev4", 00:17:38.314 "uuid": "9f1e989b-229d-a157-b0ec-684b849536fe", 00:17:38.314 "is_configured": true, 00:17:38.314 "data_offset": 2048, 00:17:38.314 "data_size": 63488 00:17:38.314 } 00:17:38.314 ] 00:17:38.314 }' 00:17:38.315 06:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.315 06:30:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.882 06:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:38.882 06:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:38.882 [2024-07-23 06:30:51.244472] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x30f6084a0ec0 00:17:39.817 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.076 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.334 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.334 "name": "raid_bdev1", 00:17:40.334 "uuid": "1aa740be-48bd-11ef-a06c-59ddad71024c", 00:17:40.334 "strip_size_kb": 64, 00:17:40.334 "state": "online", 00:17:40.334 "raid_level": "concat", 00:17:40.334 "superblock": true, 00:17:40.334 "num_base_bdevs": 4, 00:17:40.334 "num_base_bdevs_discovered": 4, 00:17:40.334 "num_base_bdevs_operational": 4, 00:17:40.334 "base_bdevs_list": [ 00:17:40.334 { 00:17:40.334 "name": "BaseBdev1", 00:17:40.334 "uuid": 
"aca5cf4d-ca8b-fd54-b71c-fe4f92e93dc1", 00:17:40.334 "is_configured": true, 00:17:40.334 "data_offset": 2048, 00:17:40.334 "data_size": 63488 00:17:40.334 }, 00:17:40.334 { 00:17:40.334 "name": "BaseBdev2", 00:17:40.334 "uuid": "8eab1714-b2e0-db50-aec8-68cb3810bf8e", 00:17:40.334 "is_configured": true, 00:17:40.334 "data_offset": 2048, 00:17:40.334 "data_size": 63488 00:17:40.334 }, 00:17:40.334 { 00:17:40.334 "name": "BaseBdev3", 00:17:40.334 "uuid": "e56d7ba4-c35d-cb5e-927b-036efdf101df", 00:17:40.334 "is_configured": true, 00:17:40.334 "data_offset": 2048, 00:17:40.334 "data_size": 63488 00:17:40.334 }, 00:17:40.334 { 00:17:40.334 "name": "BaseBdev4", 00:17:40.334 "uuid": "9f1e989b-229d-a157-b0ec-684b849536fe", 00:17:40.334 "is_configured": true, 00:17:40.334 "data_offset": 2048, 00:17:40.334 "data_size": 63488 00:17:40.334 } 00:17:40.334 ] 00:17:40.334 }' 00:17:40.334 06:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.334 06:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.593 06:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:40.852 [2024-07-23 06:30:53.319249] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.852 [2024-07-23 06:30:53.319278] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.852 [2024-07-23 06:30:53.319664] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.852 [2024-07-23 06:30:53.319675] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.852 [2024-07-23 06:30:53.319684] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.852 [2024-07-23 06:30:53.319689] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30f608435900 name raid_bdev1, state offline 00:17:40.852 0 00:17:40.852 06:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62789 00:17:40.852 06:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 62789 ']' 00:17:40.852 06:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 62789 00:17:40.852 06:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:40.852 06:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:40.852 06:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 62789 00:17:40.852 06:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:17:40.852 06:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:17:40.852 06:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:17:40.852 killing process with pid 62789 00:17:40.852 06:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62789' 00:17:40.852 06:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 62789 00:17:40.852 [2024-07-23 06:30:53.348592] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:40.852 06:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 62789 00:17:40.852 [2024-07-23 06:30:53.374834] 
bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:41.155 06:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.KwvtzVAxng 00:17:41.155 06:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:41.155 06:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:41.155 06:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:17:41.155 06:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:41.155 06:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:41.155 06:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:41.155 06:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:17:41.155 00:17:41.155 real 0m7.429s 00:17:41.155 user 0m11.967s 00:17:41.155 sys 0m1.164s 00:17:41.155 ************************************ 00:17:41.155 06:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:41.155 06:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.155 END TEST raid_read_error_test 00:17:41.155 ************************************ 00:17:41.155 06:30:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:41.155 06:30:53 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:17:41.155 06:30:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:41.155 06:30:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:41.155 06:30:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:41.155 ************************************ 00:17:41.155 START TEST raid_write_error_test 00:17:41.155 ************************************ 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.3lMeIAoR86 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62927 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62927 /var/tmp/spdk-raid.sock 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 62927 ']' 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.155 06:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.155 [2024-07-23 06:30:53.623532] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:41.155 [2024-07-23 06:30:53.623707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:41.721 EAL: TSC is not safe to use in SMP mode 00:17:41.721 EAL: TSC is not invariant 00:17:41.721 [2024-07-23 06:30:54.200368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.979 [2024-07-23 06:30:54.307232] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
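Note (editor): the write-error test starting here builds the same base-bdev stack the read test built above: each base device is a malloc bdev, wrapped in an error-injection bdev (EE_*), and exposed through a passthru bdev named BaseBdevN, after which the concat raid is assembled. The sketch below reproduces that chain with the exact rpc.py subcommands and arguments visible in the log; only the rpc() wrapper and the loop are editorial shorthand, and it assumes the bdevperf RPC socket from the previous step is already listening.

  # Sketch of the per-base-bdev stack and raid assembly used by these tests.
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk-raid.sock
  rpc() { "$SPDK"/scripts/rpc.py -s "$SOCK" "$@"; }

  for i in 1 2 3 4; do
      rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"            # 32 MB bdev, 512 B blocks
      rpc bdev_error_create "BaseBdev${i}_malloc"                       # error-injection wrapper (EE_BaseBdevN_malloc)
      rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done

  # Assemble the concat raid with a 64k strip size and superblock, as in the log:
  rpc bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s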
00:17:41.979 [2024-07-23 06:30:54.309743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.979 [2024-07-23 06:30:54.310715] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.979 [2024-07-23 06:30:54.310735] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.237 06:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.237 06:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:42.237 06:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:42.237 06:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:42.495 BaseBdev1_malloc 00:17:42.495 06:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:43.062 true 00:17:43.062 06:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:43.062 [2024-07-23 06:30:55.568695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:43.062 [2024-07-23 06:30:55.568765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.062 [2024-07-23 06:30:55.568794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x31797bc34780 00:17:43.062 [2024-07-23 06:30:55.568803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.062 [2024-07-23 06:30:55.569516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.062 [2024-07-23 06:30:55.569547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:43.062 BaseBdev1 00:17:43.321 06:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:43.321 06:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:43.321 BaseBdev2_malloc 00:17:43.321 06:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:43.888 true 00:17:43.888 06:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:43.888 [2024-07-23 06:30:56.388696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:43.888 [2024-07-23 06:30:56.388758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.888 [2024-07-23 06:30:56.388815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x31797bc34c80 00:17:43.888 [2024-07-23 06:30:56.388823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.888 [2024-07-23 06:30:56.389604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.888 [2024-07-23 06:30:56.389642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:17:43.888 BaseBdev2 00:17:43.888 06:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:43.888 06:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:44.454 BaseBdev3_malloc 00:17:44.454 06:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:44.454 true 00:17:44.711 06:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:44.711 [2024-07-23 06:30:57.208896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:44.711 [2024-07-23 06:30:57.208981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.711 [2024-07-23 06:30:57.209023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x31797bc35180 00:17:44.711 [2024-07-23 06:30:57.209031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.711 [2024-07-23 06:30:57.209764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.711 [2024-07-23 06:30:57.209794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:44.711 BaseBdev3 00:17:44.711 06:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:44.711 06:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:45.277 BaseBdev4_malloc 00:17:45.278 06:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:45.278 true 00:17:45.278 06:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:45.536 [2024-07-23 06:30:58.044930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:45.536 [2024-07-23 06:30:58.045039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.536 [2024-07-23 06:30:58.045092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x31797bc35680 00:17:45.536 [2024-07-23 06:30:58.045103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.536 [2024-07-23 06:30:58.045812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.536 [2024-07-23 06:30:58.045842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:45.536 BaseBdev4 00:17:45.795 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:46.054 [2024-07-23 06:30:58.328954] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.054 [2024-07-23 06:30:58.329670] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.054 [2024-07-23 06:30:58.329699] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:46.054 [2024-07-23 06:30:58.329714] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:46.054 [2024-07-23 06:30:58.329779] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x31797bc35900 00:17:46.054 [2024-07-23 06:30:58.329785] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:46.054 [2024-07-23 06:30:58.329843] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x31797bca0e20 00:17:46.054 [2024-07-23 06:30:58.329953] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x31797bc35900 00:17:46.054 [2024-07-23 06:30:58.329959] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x31797bc35900 00:17:46.054 [2024-07-23 06:30:58.329987] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.054 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:46.054 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:46.054 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:46.054 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:46.054 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:46.054 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:46.054 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.054 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.054 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.054 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.054 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.054 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.312 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.312 "name": "raid_bdev1", 00:17:46.312 "uuid": "1f526e62-48bd-11ef-a06c-59ddad71024c", 00:17:46.312 "strip_size_kb": 64, 00:17:46.312 "state": "online", 00:17:46.312 "raid_level": "concat", 00:17:46.312 "superblock": true, 00:17:46.312 "num_base_bdevs": 4, 00:17:46.312 "num_base_bdevs_discovered": 4, 00:17:46.312 "num_base_bdevs_operational": 4, 00:17:46.312 "base_bdevs_list": [ 00:17:46.312 { 00:17:46.312 "name": "BaseBdev1", 00:17:46.312 "uuid": "a5c4aa25-662f-925f-8b02-b42bf1e6962c", 00:17:46.312 "is_configured": true, 00:17:46.312 "data_offset": 2048, 00:17:46.312 "data_size": 63488 00:17:46.312 }, 00:17:46.312 { 00:17:46.312 "name": "BaseBdev2", 00:17:46.312 "uuid": "41a0795e-6369-795a-8084-f65eb37d1548", 00:17:46.312 "is_configured": true, 00:17:46.312 "data_offset": 2048, 00:17:46.312 "data_size": 63488 00:17:46.312 }, 00:17:46.312 { 00:17:46.312 "name": "BaseBdev3", 00:17:46.312 "uuid": 
"8088ccb9-8bcb-dd5a-b17a-0eb9da8a19bd", 00:17:46.312 "is_configured": true, 00:17:46.312 "data_offset": 2048, 00:17:46.312 "data_size": 63488 00:17:46.312 }, 00:17:46.312 { 00:17:46.312 "name": "BaseBdev4", 00:17:46.312 "uuid": "0b0bd988-2010-2b59-8826-1fc5ec7dadde", 00:17:46.312 "is_configured": true, 00:17:46.312 "data_offset": 2048, 00:17:46.312 "data_size": 63488 00:17:46.312 } 00:17:46.312 ] 00:17:46.312 }' 00:17:46.312 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.312 06:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.570 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:46.570 06:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:46.570 [2024-07-23 06:30:59.017267] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x31797bca0ec0 00:17:47.503 06:30:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.762 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.020 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.020 "name": "raid_bdev1", 00:17:48.020 "uuid": "1f526e62-48bd-11ef-a06c-59ddad71024c", 00:17:48.020 "strip_size_kb": 64, 00:17:48.020 "state": "online", 00:17:48.020 "raid_level": "concat", 00:17:48.020 "superblock": true, 00:17:48.020 "num_base_bdevs": 4, 00:17:48.020 "num_base_bdevs_discovered": 4, 00:17:48.020 "num_base_bdevs_operational": 4, 00:17:48.020 "base_bdevs_list": [ 00:17:48.020 { 00:17:48.020 "name": "BaseBdev1", 00:17:48.021 "uuid": 
"a5c4aa25-662f-925f-8b02-b42bf1e6962c", 00:17:48.021 "is_configured": true, 00:17:48.021 "data_offset": 2048, 00:17:48.021 "data_size": 63488 00:17:48.021 }, 00:17:48.021 { 00:17:48.021 "name": "BaseBdev2", 00:17:48.021 "uuid": "41a0795e-6369-795a-8084-f65eb37d1548", 00:17:48.021 "is_configured": true, 00:17:48.021 "data_offset": 2048, 00:17:48.021 "data_size": 63488 00:17:48.021 }, 00:17:48.021 { 00:17:48.021 "name": "BaseBdev3", 00:17:48.021 "uuid": "8088ccb9-8bcb-dd5a-b17a-0eb9da8a19bd", 00:17:48.021 "is_configured": true, 00:17:48.021 "data_offset": 2048, 00:17:48.021 "data_size": 63488 00:17:48.021 }, 00:17:48.021 { 00:17:48.021 "name": "BaseBdev4", 00:17:48.021 "uuid": "0b0bd988-2010-2b59-8826-1fc5ec7dadde", 00:17:48.021 "is_configured": true, 00:17:48.021 "data_offset": 2048, 00:17:48.021 "data_size": 63488 00:17:48.021 } 00:17:48.021 ] 00:17:48.021 }' 00:17:48.021 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.021 06:31:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.587 06:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:48.587 [2024-07-23 06:31:01.088268] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.587 [2024-07-23 06:31:01.088300] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.587 [2024-07-23 06:31:01.088647] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.587 [2024-07-23 06:31:01.088659] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.587 [2024-07-23 06:31:01.088668] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.587 [2024-07-23 06:31:01.088672] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x31797bc35900 name raid_bdev1, state offline 00:17:48.587 0 00:17:48.587 06:31:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62927 00:17:48.587 06:31:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 62927 ']' 00:17:48.587 06:31:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 62927 00:17:48.587 06:31:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:48.587 06:31:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 62927 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:17:48.845 killing process with pid 62927 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62927' 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 62927 00:17:48.845 [2024-07-23 06:31:01.116286] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 62927 00:17:48.845 [2024-07-23 
06:31:01.142021] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.3lMeIAoR86 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:17:48.845 00:17:48.845 real 0m7.729s 00:17:48.845 user 0m12.407s 00:17:48.845 sys 0m1.272s 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:48.845 ************************************ 00:17:48.845 END TEST raid_write_error_test 00:17:48.845 ************************************ 00:17:48.845 06:31:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.104 06:31:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:49.104 06:31:01 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:49.104 06:31:01 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:17:49.104 06:31:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:49.104 06:31:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.104 06:31:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:49.104 ************************************ 00:17:49.104 START TEST raid_state_function_test 00:17:49.104 ************************************ 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 
00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=63063 00:17:49.104 Process raid pid: 63063 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 63063' 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 63063 /var/tmp/spdk-raid.sock 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 63063 ']' 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.104 06:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.104 [2024-07-23 06:31:01.400920] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
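Note (editor): unlike the two I/O error tests, raid_state_function_test drives a bare bdev_svc app (pid 63063 above) and checks raid state transitions: it creates a raid1 volume named Existed_Raid before any of its base bdevs exist and then verifies the reported state is "configuring". The lines below sketch that first check using the rpc.py and jq invocations that appear in the trace that follows; the rpc() wrapper and the echo are editorial, and the socket path is assumed from the log.

  # Sketch of the initial "configuring" state check performed by this test.
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk-raid.sock
  rpc() { "$SPDK"/scripts/rpc.py -s "$SOCK" "$@"; }

  rpc bdev_raid_create -r raid1 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  state=$(rpc bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "Existed_Raid") | .state')
  echo "Existed_Raid state: $state"   # expected: configuring (0 of 4 base bdevs present)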
00:17:49.104 [2024-07-23 06:31:01.401194] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:49.670 EAL: TSC is not safe to use in SMP mode 00:17:49.670 EAL: TSC is not invariant 00:17:49.670 [2024-07-23 06:31:01.949886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.670 [2024-07-23 06:31:02.043921] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:49.670 [2024-07-23 06:31:02.046201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.670 [2024-07-23 06:31:02.047070] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.670 [2024-07-23 06:31:02.047086] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.236 06:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.236 06:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:17:50.236 06:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:50.236 [2024-07-23 06:31:02.756000] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:50.236 [2024-07-23 06:31:02.756077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:50.236 [2024-07-23 06:31:02.756098] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:50.236 [2024-07-23 06:31:02.756107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:50.236 [2024-07-23 06:31:02.756111] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:50.236 [2024-07-23 06:31:02.756118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:50.236 [2024-07-23 06:31:02.756122] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:50.236 [2024-07-23 06:31:02.756129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:50.494 06:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:50.494 06:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:50.494 06:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:50.494 06:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:50.494 06:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:50.494 06:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:50.494 06:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.494 06:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.494 06:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:50.494 06:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:50.494 06:31:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.494 06:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.752 06:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.752 "name": "Existed_Raid", 00:17:50.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.752 "strip_size_kb": 0, 00:17:50.752 "state": "configuring", 00:17:50.752 "raid_level": "raid1", 00:17:50.752 "superblock": false, 00:17:50.752 "num_base_bdevs": 4, 00:17:50.752 "num_base_bdevs_discovered": 0, 00:17:50.752 "num_base_bdevs_operational": 4, 00:17:50.752 "base_bdevs_list": [ 00:17:50.752 { 00:17:50.752 "name": "BaseBdev1", 00:17:50.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.752 "is_configured": false, 00:17:50.752 "data_offset": 0, 00:17:50.752 "data_size": 0 00:17:50.752 }, 00:17:50.752 { 00:17:50.752 "name": "BaseBdev2", 00:17:50.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.752 "is_configured": false, 00:17:50.752 "data_offset": 0, 00:17:50.752 "data_size": 0 00:17:50.752 }, 00:17:50.752 { 00:17:50.752 "name": "BaseBdev3", 00:17:50.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.752 "is_configured": false, 00:17:50.752 "data_offset": 0, 00:17:50.752 "data_size": 0 00:17:50.752 }, 00:17:50.752 { 00:17:50.752 "name": "BaseBdev4", 00:17:50.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.752 "is_configured": false, 00:17:50.752 "data_offset": 0, 00:17:50.752 "data_size": 0 00:17:50.752 } 00:17:50.752 ] 00:17:50.752 }' 00:17:50.752 06:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.752 06:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.010 06:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:51.268 [2024-07-23 06:31:03.612094] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:51.268 [2024-07-23 06:31:03.612162] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ca660234500 name Existed_Raid, state configuring 00:17:51.268 06:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:51.526 [2024-07-23 06:31:03.876098] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:51.526 [2024-07-23 06:31:03.876172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:51.526 [2024-07-23 06:31:03.876193] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:51.526 [2024-07-23 06:31:03.876202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:51.526 [2024-07-23 06:31:03.876206] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:51.526 [2024-07-23 06:31:03.876219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:51.526 [2024-07-23 06:31:03.876237] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:51.526 
[2024-07-23 06:31:03.876244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:51.526 06:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:51.784 [2024-07-23 06:31:04.201279] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.785 BaseBdev1 00:17:51.785 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:51.785 06:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:51.785 06:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:51.785 06:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:51.785 06:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:51.785 06:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:51.785 06:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:52.042 06:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:52.358 [ 00:17:52.358 { 00:17:52.358 "name": "BaseBdev1", 00:17:52.358 "aliases": [ 00:17:52.358 "22d24e19-48bd-11ef-a06c-59ddad71024c" 00:17:52.358 ], 00:17:52.358 "product_name": "Malloc disk", 00:17:52.358 "block_size": 512, 00:17:52.358 "num_blocks": 65536, 00:17:52.358 "uuid": "22d24e19-48bd-11ef-a06c-59ddad71024c", 00:17:52.358 "assigned_rate_limits": { 00:17:52.358 "rw_ios_per_sec": 0, 00:17:52.358 "rw_mbytes_per_sec": 0, 00:17:52.358 "r_mbytes_per_sec": 0, 00:17:52.358 "w_mbytes_per_sec": 0 00:17:52.358 }, 00:17:52.358 "claimed": true, 00:17:52.358 "claim_type": "exclusive_write", 00:17:52.358 "zoned": false, 00:17:52.358 "supported_io_types": { 00:17:52.358 "read": true, 00:17:52.358 "write": true, 00:17:52.358 "unmap": true, 00:17:52.358 "flush": true, 00:17:52.358 "reset": true, 00:17:52.358 "nvme_admin": false, 00:17:52.358 "nvme_io": false, 00:17:52.358 "nvme_io_md": false, 00:17:52.358 "write_zeroes": true, 00:17:52.358 "zcopy": true, 00:17:52.358 "get_zone_info": false, 00:17:52.358 "zone_management": false, 00:17:52.358 "zone_append": false, 00:17:52.358 "compare": false, 00:17:52.358 "compare_and_write": false, 00:17:52.358 "abort": true, 00:17:52.358 "seek_hole": false, 00:17:52.358 "seek_data": false, 00:17:52.358 "copy": true, 00:17:52.358 "nvme_iov_md": false 00:17:52.358 }, 00:17:52.358 "memory_domains": [ 00:17:52.358 { 00:17:52.358 "dma_device_id": "system", 00:17:52.358 "dma_device_type": 1 00:17:52.358 }, 00:17:52.358 { 00:17:52.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.358 "dma_device_type": 2 00:17:52.358 } 00:17:52.358 ], 00:17:52.358 "driver_specific": {} 00:17:52.358 } 00:17:52.358 ] 00:17:52.358 06:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:52.358 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:52.358 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:17:52.358 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:52.358 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:52.358 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:52.358 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:52.358 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.358 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.358 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.358 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.358 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.358 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.617 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.617 "name": "Existed_Raid", 00:17:52.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.617 "strip_size_kb": 0, 00:17:52.617 "state": "configuring", 00:17:52.617 "raid_level": "raid1", 00:17:52.617 "superblock": false, 00:17:52.617 "num_base_bdevs": 4, 00:17:52.617 "num_base_bdevs_discovered": 1, 00:17:52.617 "num_base_bdevs_operational": 4, 00:17:52.617 "base_bdevs_list": [ 00:17:52.617 { 00:17:52.617 "name": "BaseBdev1", 00:17:52.617 "uuid": "22d24e19-48bd-11ef-a06c-59ddad71024c", 00:17:52.617 "is_configured": true, 00:17:52.617 "data_offset": 0, 00:17:52.617 "data_size": 65536 00:17:52.617 }, 00:17:52.617 { 00:17:52.617 "name": "BaseBdev2", 00:17:52.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.617 "is_configured": false, 00:17:52.617 "data_offset": 0, 00:17:52.617 "data_size": 0 00:17:52.617 }, 00:17:52.617 { 00:17:52.617 "name": "BaseBdev3", 00:17:52.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.617 "is_configured": false, 00:17:52.617 "data_offset": 0, 00:17:52.617 "data_size": 0 00:17:52.617 }, 00:17:52.617 { 00:17:52.617 "name": "BaseBdev4", 00:17:52.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.617 "is_configured": false, 00:17:52.617 "data_offset": 0, 00:17:52.617 "data_size": 0 00:17:52.617 } 00:17:52.617 ] 00:17:52.617 }' 00:17:52.617 06:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.617 06:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.876 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:53.134 [2024-07-23 06:31:05.628276] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.134 [2024-07-23 06:31:05.628315] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ca660234500 name Existed_Raid, state configuring 00:17:53.134 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:53.392 
[2024-07-23 06:31:05.904316] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.392 [2024-07-23 06:31:05.905219] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.392 [2024-07-23 06:31:05.905294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.392 [2024-07-23 06:31:05.905300] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:53.392 [2024-07-23 06:31:05.905309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:53.393 [2024-07-23 06:31:05.905313] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:53.393 [2024-07-23 06:31:05.905320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:53.650 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:53.650 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:53.650 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:53.650 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.650 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:53.651 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:53.651 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:53.651 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:53.651 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.651 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.651 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.651 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.651 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.651 06:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.909 06:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.909 "name": "Existed_Raid", 00:17:53.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.909 "strip_size_kb": 0, 00:17:53.909 "state": "configuring", 00:17:53.909 "raid_level": "raid1", 00:17:53.909 "superblock": false, 00:17:53.909 "num_base_bdevs": 4, 00:17:53.909 "num_base_bdevs_discovered": 1, 00:17:53.909 "num_base_bdevs_operational": 4, 00:17:53.909 "base_bdevs_list": [ 00:17:53.909 { 00:17:53.909 "name": "BaseBdev1", 00:17:53.909 "uuid": "22d24e19-48bd-11ef-a06c-59ddad71024c", 00:17:53.909 "is_configured": true, 00:17:53.909 "data_offset": 0, 00:17:53.909 "data_size": 65536 00:17:53.909 }, 00:17:53.909 { 00:17:53.909 "name": "BaseBdev2", 00:17:53.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.909 "is_configured": false, 00:17:53.909 "data_offset": 0, 00:17:53.909 "data_size": 0 00:17:53.909 }, 00:17:53.909 { 
00:17:53.909 "name": "BaseBdev3", 00:17:53.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.909 "is_configured": false, 00:17:53.909 "data_offset": 0, 00:17:53.909 "data_size": 0 00:17:53.909 }, 00:17:53.909 { 00:17:53.909 "name": "BaseBdev4", 00:17:53.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.909 "is_configured": false, 00:17:53.909 "data_offset": 0, 00:17:53.909 "data_size": 0 00:17:53.909 } 00:17:53.909 ] 00:17:53.909 }' 00:17:53.909 06:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.909 06:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.166 06:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:54.425 [2024-07-23 06:31:06.860551] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:54.425 BaseBdev2 00:17:54.425 06:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:54.425 06:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:54.425 06:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:54.425 06:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:54.425 06:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:54.425 06:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:54.425 06:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:54.683 06:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:54.942 [ 00:17:54.942 { 00:17:54.942 "name": "BaseBdev2", 00:17:54.942 "aliases": [ 00:17:54.942 "24683af6-48bd-11ef-a06c-59ddad71024c" 00:17:54.942 ], 00:17:54.942 "product_name": "Malloc disk", 00:17:54.942 "block_size": 512, 00:17:54.942 "num_blocks": 65536, 00:17:54.942 "uuid": "24683af6-48bd-11ef-a06c-59ddad71024c", 00:17:54.942 "assigned_rate_limits": { 00:17:54.942 "rw_ios_per_sec": 0, 00:17:54.942 "rw_mbytes_per_sec": 0, 00:17:54.942 "r_mbytes_per_sec": 0, 00:17:54.942 "w_mbytes_per_sec": 0 00:17:54.942 }, 00:17:54.942 "claimed": true, 00:17:54.942 "claim_type": "exclusive_write", 00:17:54.942 "zoned": false, 00:17:54.942 "supported_io_types": { 00:17:54.942 "read": true, 00:17:54.942 "write": true, 00:17:54.942 "unmap": true, 00:17:54.942 "flush": true, 00:17:54.942 "reset": true, 00:17:54.942 "nvme_admin": false, 00:17:54.942 "nvme_io": false, 00:17:54.942 "nvme_io_md": false, 00:17:54.942 "write_zeroes": true, 00:17:54.942 "zcopy": true, 00:17:54.942 "get_zone_info": false, 00:17:54.942 "zone_management": false, 00:17:54.942 "zone_append": false, 00:17:54.942 "compare": false, 00:17:54.942 "compare_and_write": false, 00:17:54.942 "abort": true, 00:17:54.942 "seek_hole": false, 00:17:54.942 "seek_data": false, 00:17:54.942 "copy": true, 00:17:54.942 "nvme_iov_md": false 00:17:54.942 }, 00:17:54.942 "memory_domains": [ 00:17:54.942 { 00:17:54.942 "dma_device_id": "system", 00:17:54.942 "dma_device_type": 1 00:17:54.942 }, 00:17:54.942 { 00:17:54.942 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.942 "dma_device_type": 2 00:17:54.942 } 00:17:54.942 ], 00:17:54.942 "driver_specific": {} 00:17:54.942 } 00:17:54.942 ] 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.942 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.201 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.201 "name": "Existed_Raid", 00:17:55.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.201 "strip_size_kb": 0, 00:17:55.201 "state": "configuring", 00:17:55.201 "raid_level": "raid1", 00:17:55.201 "superblock": false, 00:17:55.201 "num_base_bdevs": 4, 00:17:55.201 "num_base_bdevs_discovered": 2, 00:17:55.201 "num_base_bdevs_operational": 4, 00:17:55.201 "base_bdevs_list": [ 00:17:55.201 { 00:17:55.201 "name": "BaseBdev1", 00:17:55.201 "uuid": "22d24e19-48bd-11ef-a06c-59ddad71024c", 00:17:55.201 "is_configured": true, 00:17:55.201 "data_offset": 0, 00:17:55.201 "data_size": 65536 00:17:55.201 }, 00:17:55.201 { 00:17:55.201 "name": "BaseBdev2", 00:17:55.201 "uuid": "24683af6-48bd-11ef-a06c-59ddad71024c", 00:17:55.201 "is_configured": true, 00:17:55.201 "data_offset": 0, 00:17:55.201 "data_size": 65536 00:17:55.201 }, 00:17:55.201 { 00:17:55.201 "name": "BaseBdev3", 00:17:55.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.201 "is_configured": false, 00:17:55.201 "data_offset": 0, 00:17:55.201 "data_size": 0 00:17:55.201 }, 00:17:55.201 { 00:17:55.201 "name": "BaseBdev4", 00:17:55.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.201 "is_configured": false, 00:17:55.201 "data_offset": 0, 00:17:55.201 "data_size": 0 00:17:55.201 } 00:17:55.201 ] 00:17:55.201 }' 00:17:55.201 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.201 06:31:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:55.478 06:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:55.737 [2024-07-23 06:31:08.204578] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:55.737 BaseBdev3 00:17:55.737 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:55.737 06:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:55.737 06:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:55.737 06:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:55.737 06:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:55.737 06:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:55.737 06:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:55.995 06:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:56.253 [ 00:17:56.253 { 00:17:56.253 "name": "BaseBdev3", 00:17:56.253 "aliases": [ 00:17:56.254 "253550a3-48bd-11ef-a06c-59ddad71024c" 00:17:56.254 ], 00:17:56.254 "product_name": "Malloc disk", 00:17:56.254 "block_size": 512, 00:17:56.254 "num_blocks": 65536, 00:17:56.254 "uuid": "253550a3-48bd-11ef-a06c-59ddad71024c", 00:17:56.254 "assigned_rate_limits": { 00:17:56.254 "rw_ios_per_sec": 0, 00:17:56.254 "rw_mbytes_per_sec": 0, 00:17:56.254 "r_mbytes_per_sec": 0, 00:17:56.254 "w_mbytes_per_sec": 0 00:17:56.254 }, 00:17:56.254 "claimed": true, 00:17:56.254 "claim_type": "exclusive_write", 00:17:56.254 "zoned": false, 00:17:56.254 "supported_io_types": { 00:17:56.254 "read": true, 00:17:56.254 "write": true, 00:17:56.254 "unmap": true, 00:17:56.254 "flush": true, 00:17:56.254 "reset": true, 00:17:56.254 "nvme_admin": false, 00:17:56.254 "nvme_io": false, 00:17:56.254 "nvme_io_md": false, 00:17:56.254 "write_zeroes": true, 00:17:56.254 "zcopy": true, 00:17:56.254 "get_zone_info": false, 00:17:56.254 "zone_management": false, 00:17:56.254 "zone_append": false, 00:17:56.254 "compare": false, 00:17:56.254 "compare_and_write": false, 00:17:56.254 "abort": true, 00:17:56.254 "seek_hole": false, 00:17:56.254 "seek_data": false, 00:17:56.254 "copy": true, 00:17:56.254 "nvme_iov_md": false 00:17:56.254 }, 00:17:56.254 "memory_domains": [ 00:17:56.254 { 00:17:56.254 "dma_device_id": "system", 00:17:56.254 "dma_device_type": 1 00:17:56.254 }, 00:17:56.254 { 00:17:56.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.254 "dma_device_type": 2 00:17:56.254 } 00:17:56.254 ], 00:17:56.254 "driver_specific": {} 00:17:56.254 } 00:17:56.254 ] 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 4 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.512 06:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.770 06:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.770 "name": "Existed_Raid", 00:17:56.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.770 "strip_size_kb": 0, 00:17:56.770 "state": "configuring", 00:17:56.770 "raid_level": "raid1", 00:17:56.770 "superblock": false, 00:17:56.770 "num_base_bdevs": 4, 00:17:56.770 "num_base_bdevs_discovered": 3, 00:17:56.770 "num_base_bdevs_operational": 4, 00:17:56.770 "base_bdevs_list": [ 00:17:56.770 { 00:17:56.770 "name": "BaseBdev1", 00:17:56.770 "uuid": "22d24e19-48bd-11ef-a06c-59ddad71024c", 00:17:56.770 "is_configured": true, 00:17:56.770 "data_offset": 0, 00:17:56.770 "data_size": 65536 00:17:56.770 }, 00:17:56.770 { 00:17:56.770 "name": "BaseBdev2", 00:17:56.770 "uuid": "24683af6-48bd-11ef-a06c-59ddad71024c", 00:17:56.770 "is_configured": true, 00:17:56.770 "data_offset": 0, 00:17:56.770 "data_size": 65536 00:17:56.770 }, 00:17:56.770 { 00:17:56.770 "name": "BaseBdev3", 00:17:56.770 "uuid": "253550a3-48bd-11ef-a06c-59ddad71024c", 00:17:56.770 "is_configured": true, 00:17:56.770 "data_offset": 0, 00:17:56.770 "data_size": 65536 00:17:56.770 }, 00:17:56.770 { 00:17:56.770 "name": "BaseBdev4", 00:17:56.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.770 "is_configured": false, 00:17:56.770 "data_offset": 0, 00:17:56.770 "data_size": 0 00:17:56.770 } 00:17:56.770 ] 00:17:56.770 }' 00:17:56.770 06:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.770 06:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.027 06:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:57.285 [2024-07-23 06:31:09.644600] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:57.285 [2024-07-23 06:31:09.644629] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3ca660234a00 00:17:57.285 [2024-07-23 06:31:09.644634] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:57.285 
[2024-07-23 06:31:09.644680] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3ca660297e20 00:17:57.285 [2024-07-23 06:31:09.644789] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3ca660234a00 00:17:57.285 [2024-07-23 06:31:09.644794] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3ca660234a00 00:17:57.285 [2024-07-23 06:31:09.644825] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.285 BaseBdev4 00:17:57.285 06:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:57.285 06:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:57.285 06:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:57.285 06:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:57.285 06:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:57.285 06:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:57.285 06:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:57.544 06:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:57.803 [ 00:17:57.803 { 00:17:57.803 "name": "BaseBdev4", 00:17:57.803 "aliases": [ 00:17:57.803 "26110b94-48bd-11ef-a06c-59ddad71024c" 00:17:57.803 ], 00:17:57.803 "product_name": "Malloc disk", 00:17:57.803 "block_size": 512, 00:17:57.803 "num_blocks": 65536, 00:17:57.803 "uuid": "26110b94-48bd-11ef-a06c-59ddad71024c", 00:17:57.803 "assigned_rate_limits": { 00:17:57.803 "rw_ios_per_sec": 0, 00:17:57.803 "rw_mbytes_per_sec": 0, 00:17:57.803 "r_mbytes_per_sec": 0, 00:17:57.803 "w_mbytes_per_sec": 0 00:17:57.803 }, 00:17:57.803 "claimed": true, 00:17:57.803 "claim_type": "exclusive_write", 00:17:57.803 "zoned": false, 00:17:57.803 "supported_io_types": { 00:17:57.803 "read": true, 00:17:57.803 "write": true, 00:17:57.803 "unmap": true, 00:17:57.803 "flush": true, 00:17:57.803 "reset": true, 00:17:57.803 "nvme_admin": false, 00:17:57.803 "nvme_io": false, 00:17:57.803 "nvme_io_md": false, 00:17:57.803 "write_zeroes": true, 00:17:57.803 "zcopy": true, 00:17:57.803 "get_zone_info": false, 00:17:57.803 "zone_management": false, 00:17:57.803 "zone_append": false, 00:17:57.803 "compare": false, 00:17:57.803 "compare_and_write": false, 00:17:57.803 "abort": true, 00:17:57.803 "seek_hole": false, 00:17:57.803 "seek_data": false, 00:17:57.803 "copy": true, 00:17:57.803 "nvme_iov_md": false 00:17:57.803 }, 00:17:57.803 "memory_domains": [ 00:17:57.803 { 00:17:57.803 "dma_device_id": "system", 00:17:57.803 "dma_device_type": 1 00:17:57.803 }, 00:17:57.803 { 00:17:57.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.803 "dma_device_type": 2 00:17:57.803 } 00:17:57.803 ], 00:17:57.803 "driver_specific": {} 00:17:57.803 } 00:17:57.803 ] 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.803 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.061 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.062 "name": "Existed_Raid", 00:17:58.062 "uuid": "26111201-48bd-11ef-a06c-59ddad71024c", 00:17:58.062 "strip_size_kb": 0, 00:17:58.062 "state": "online", 00:17:58.062 "raid_level": "raid1", 00:17:58.062 "superblock": false, 00:17:58.062 "num_base_bdevs": 4, 00:17:58.062 "num_base_bdevs_discovered": 4, 00:17:58.062 "num_base_bdevs_operational": 4, 00:17:58.062 "base_bdevs_list": [ 00:17:58.062 { 00:17:58.062 "name": "BaseBdev1", 00:17:58.062 "uuid": "22d24e19-48bd-11ef-a06c-59ddad71024c", 00:17:58.062 "is_configured": true, 00:17:58.062 "data_offset": 0, 00:17:58.062 "data_size": 65536 00:17:58.062 }, 00:17:58.062 { 00:17:58.062 "name": "BaseBdev2", 00:17:58.062 "uuid": "24683af6-48bd-11ef-a06c-59ddad71024c", 00:17:58.062 "is_configured": true, 00:17:58.062 "data_offset": 0, 00:17:58.062 "data_size": 65536 00:17:58.062 }, 00:17:58.062 { 00:17:58.062 "name": "BaseBdev3", 00:17:58.062 "uuid": "253550a3-48bd-11ef-a06c-59ddad71024c", 00:17:58.062 "is_configured": true, 00:17:58.062 "data_offset": 0, 00:17:58.062 "data_size": 65536 00:17:58.062 }, 00:17:58.062 { 00:17:58.062 "name": "BaseBdev4", 00:17:58.062 "uuid": "26110b94-48bd-11ef-a06c-59ddad71024c", 00:17:58.062 "is_configured": true, 00:17:58.062 "data_offset": 0, 00:17:58.062 "data_size": 65536 00:17:58.062 } 00:17:58.062 ] 00:17:58.062 }' 00:17:58.062 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.062 06:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.628 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:58.628 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:58.628 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:58.628 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # 
local base_bdev_info 00:17:58.628 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:58.628 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:58.628 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:58.628 06:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:58.628 [2024-07-23 06:31:11.092552] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.628 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:58.628 "name": "Existed_Raid", 00:17:58.628 "aliases": [ 00:17:58.628 "26111201-48bd-11ef-a06c-59ddad71024c" 00:17:58.628 ], 00:17:58.628 "product_name": "Raid Volume", 00:17:58.628 "block_size": 512, 00:17:58.628 "num_blocks": 65536, 00:17:58.628 "uuid": "26111201-48bd-11ef-a06c-59ddad71024c", 00:17:58.628 "assigned_rate_limits": { 00:17:58.628 "rw_ios_per_sec": 0, 00:17:58.628 "rw_mbytes_per_sec": 0, 00:17:58.628 "r_mbytes_per_sec": 0, 00:17:58.628 "w_mbytes_per_sec": 0 00:17:58.628 }, 00:17:58.628 "claimed": false, 00:17:58.628 "zoned": false, 00:17:58.628 "supported_io_types": { 00:17:58.628 "read": true, 00:17:58.628 "write": true, 00:17:58.628 "unmap": false, 00:17:58.628 "flush": false, 00:17:58.628 "reset": true, 00:17:58.628 "nvme_admin": false, 00:17:58.628 "nvme_io": false, 00:17:58.628 "nvme_io_md": false, 00:17:58.628 "write_zeroes": true, 00:17:58.628 "zcopy": false, 00:17:58.628 "get_zone_info": false, 00:17:58.628 "zone_management": false, 00:17:58.628 "zone_append": false, 00:17:58.628 "compare": false, 00:17:58.628 "compare_and_write": false, 00:17:58.628 "abort": false, 00:17:58.628 "seek_hole": false, 00:17:58.628 "seek_data": false, 00:17:58.628 "copy": false, 00:17:58.628 "nvme_iov_md": false 00:17:58.628 }, 00:17:58.628 "memory_domains": [ 00:17:58.628 { 00:17:58.628 "dma_device_id": "system", 00:17:58.628 "dma_device_type": 1 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.628 "dma_device_type": 2 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "dma_device_id": "system", 00:17:58.628 "dma_device_type": 1 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.628 "dma_device_type": 2 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "dma_device_id": "system", 00:17:58.628 "dma_device_type": 1 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.628 "dma_device_type": 2 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "dma_device_id": "system", 00:17:58.628 "dma_device_type": 1 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.628 "dma_device_type": 2 00:17:58.628 } 00:17:58.628 ], 00:17:58.628 "driver_specific": { 00:17:58.628 "raid": { 00:17:58.628 "uuid": "26111201-48bd-11ef-a06c-59ddad71024c", 00:17:58.628 "strip_size_kb": 0, 00:17:58.628 "state": "online", 00:17:58.628 "raid_level": "raid1", 00:17:58.628 "superblock": false, 00:17:58.628 "num_base_bdevs": 4, 00:17:58.628 "num_base_bdevs_discovered": 4, 00:17:58.628 "num_base_bdevs_operational": 4, 00:17:58.628 "base_bdevs_list": [ 00:17:58.628 { 00:17:58.628 "name": "BaseBdev1", 00:17:58.628 "uuid": "22d24e19-48bd-11ef-a06c-59ddad71024c", 00:17:58.628 "is_configured": true, 00:17:58.628 "data_offset": 0, 00:17:58.628 
"data_size": 65536 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "name": "BaseBdev2", 00:17:58.628 "uuid": "24683af6-48bd-11ef-a06c-59ddad71024c", 00:17:58.628 "is_configured": true, 00:17:58.628 "data_offset": 0, 00:17:58.628 "data_size": 65536 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "name": "BaseBdev3", 00:17:58.628 "uuid": "253550a3-48bd-11ef-a06c-59ddad71024c", 00:17:58.628 "is_configured": true, 00:17:58.628 "data_offset": 0, 00:17:58.628 "data_size": 65536 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "name": "BaseBdev4", 00:17:58.628 "uuid": "26110b94-48bd-11ef-a06c-59ddad71024c", 00:17:58.628 "is_configured": true, 00:17:58.628 "data_offset": 0, 00:17:58.628 "data_size": 65536 00:17:58.628 } 00:17:58.628 ] 00:17:58.628 } 00:17:58.628 } 00:17:58.628 }' 00:17:58.628 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:58.628 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:58.628 BaseBdev2 00:17:58.628 BaseBdev3 00:17:58.628 BaseBdev4' 00:17:58.628 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:58.628 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:58.628 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:58.886 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:58.886 "name": "BaseBdev1", 00:17:58.886 "aliases": [ 00:17:58.886 "22d24e19-48bd-11ef-a06c-59ddad71024c" 00:17:58.886 ], 00:17:58.886 "product_name": "Malloc disk", 00:17:58.886 "block_size": 512, 00:17:58.886 "num_blocks": 65536, 00:17:58.886 "uuid": "22d24e19-48bd-11ef-a06c-59ddad71024c", 00:17:58.886 "assigned_rate_limits": { 00:17:58.886 "rw_ios_per_sec": 0, 00:17:58.886 "rw_mbytes_per_sec": 0, 00:17:58.886 "r_mbytes_per_sec": 0, 00:17:58.886 "w_mbytes_per_sec": 0 00:17:58.886 }, 00:17:58.886 "claimed": true, 00:17:58.886 "claim_type": "exclusive_write", 00:17:58.886 "zoned": false, 00:17:58.886 "supported_io_types": { 00:17:58.886 "read": true, 00:17:58.886 "write": true, 00:17:58.886 "unmap": true, 00:17:58.886 "flush": true, 00:17:58.886 "reset": true, 00:17:58.886 "nvme_admin": false, 00:17:58.886 "nvme_io": false, 00:17:58.886 "nvme_io_md": false, 00:17:58.886 "write_zeroes": true, 00:17:58.886 "zcopy": true, 00:17:58.886 "get_zone_info": false, 00:17:58.886 "zone_management": false, 00:17:58.886 "zone_append": false, 00:17:58.886 "compare": false, 00:17:58.886 "compare_and_write": false, 00:17:58.886 "abort": true, 00:17:58.886 "seek_hole": false, 00:17:58.886 "seek_data": false, 00:17:58.886 "copy": true, 00:17:58.886 "nvme_iov_md": false 00:17:58.886 }, 00:17:58.886 "memory_domains": [ 00:17:58.886 { 00:17:58.886 "dma_device_id": "system", 00:17:58.886 "dma_device_type": 1 00:17:58.886 }, 00:17:58.886 { 00:17:58.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.886 "dma_device_type": 2 00:17:58.886 } 00:17:58.886 ], 00:17:58.886 "driver_specific": {} 00:17:58.886 }' 00:17:58.886 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.144 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.144 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:17:59.144 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.144 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.144 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:59.144 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.144 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.144 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:59.144 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.144 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.144 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:59.144 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:59.144 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:59.144 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:59.442 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:59.443 "name": "BaseBdev2", 00:17:59.443 "aliases": [ 00:17:59.443 "24683af6-48bd-11ef-a06c-59ddad71024c" 00:17:59.443 ], 00:17:59.443 "product_name": "Malloc disk", 00:17:59.443 "block_size": 512, 00:17:59.443 "num_blocks": 65536, 00:17:59.443 "uuid": "24683af6-48bd-11ef-a06c-59ddad71024c", 00:17:59.443 "assigned_rate_limits": { 00:17:59.443 "rw_ios_per_sec": 0, 00:17:59.443 "rw_mbytes_per_sec": 0, 00:17:59.443 "r_mbytes_per_sec": 0, 00:17:59.443 "w_mbytes_per_sec": 0 00:17:59.443 }, 00:17:59.443 "claimed": true, 00:17:59.443 "claim_type": "exclusive_write", 00:17:59.443 "zoned": false, 00:17:59.443 "supported_io_types": { 00:17:59.443 "read": true, 00:17:59.443 "write": true, 00:17:59.443 "unmap": true, 00:17:59.443 "flush": true, 00:17:59.443 "reset": true, 00:17:59.443 "nvme_admin": false, 00:17:59.443 "nvme_io": false, 00:17:59.443 "nvme_io_md": false, 00:17:59.443 "write_zeroes": true, 00:17:59.443 "zcopy": true, 00:17:59.443 "get_zone_info": false, 00:17:59.443 "zone_management": false, 00:17:59.443 "zone_append": false, 00:17:59.443 "compare": false, 00:17:59.443 "compare_and_write": false, 00:17:59.443 "abort": true, 00:17:59.443 "seek_hole": false, 00:17:59.443 "seek_data": false, 00:17:59.443 "copy": true, 00:17:59.443 "nvme_iov_md": false 00:17:59.443 }, 00:17:59.443 "memory_domains": [ 00:17:59.443 { 00:17:59.443 "dma_device_id": "system", 00:17:59.443 "dma_device_type": 1 00:17:59.443 }, 00:17:59.443 { 00:17:59.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.443 "dma_device_type": 2 00:17:59.443 } 00:17:59.443 ], 00:17:59.443 "driver_specific": {} 00:17:59.443 }' 00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:59.443 06:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:59.703 "name": "BaseBdev3", 00:17:59.703 "aliases": [ 00:17:59.703 "253550a3-48bd-11ef-a06c-59ddad71024c" 00:17:59.703 ], 00:17:59.703 "product_name": "Malloc disk", 00:17:59.703 "block_size": 512, 00:17:59.703 "num_blocks": 65536, 00:17:59.703 "uuid": "253550a3-48bd-11ef-a06c-59ddad71024c", 00:17:59.703 "assigned_rate_limits": { 00:17:59.703 "rw_ios_per_sec": 0, 00:17:59.703 "rw_mbytes_per_sec": 0, 00:17:59.703 "r_mbytes_per_sec": 0, 00:17:59.703 "w_mbytes_per_sec": 0 00:17:59.703 }, 00:17:59.703 "claimed": true, 00:17:59.703 "claim_type": "exclusive_write", 00:17:59.703 "zoned": false, 00:17:59.703 "supported_io_types": { 00:17:59.703 "read": true, 00:17:59.703 "write": true, 00:17:59.703 "unmap": true, 00:17:59.703 "flush": true, 00:17:59.703 "reset": true, 00:17:59.703 "nvme_admin": false, 00:17:59.703 "nvme_io": false, 00:17:59.703 "nvme_io_md": false, 00:17:59.703 "write_zeroes": true, 00:17:59.703 "zcopy": true, 00:17:59.703 "get_zone_info": false, 00:17:59.703 "zone_management": false, 00:17:59.703 "zone_append": false, 00:17:59.703 "compare": false, 00:17:59.703 "compare_and_write": false, 00:17:59.703 "abort": true, 00:17:59.703 "seek_hole": false, 00:17:59.703 "seek_data": false, 00:17:59.703 "copy": true, 00:17:59.703 "nvme_iov_md": false 00:17:59.703 }, 00:17:59.703 "memory_domains": [ 00:17:59.703 { 00:17:59.703 "dma_device_id": "system", 00:17:59.703 "dma_device_type": 1 00:17:59.703 }, 00:17:59.703 { 00:17:59.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.703 "dma_device_type": 2 00:17:59.703 } 00:17:59.703 ], 00:17:59.703 "driver_specific": {} 00:17:59.703 }' 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:59.703 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:59.961 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:59.961 "name": "BaseBdev4", 00:17:59.961 "aliases": [ 00:17:59.961 "26110b94-48bd-11ef-a06c-59ddad71024c" 00:17:59.961 ], 00:17:59.961 "product_name": "Malloc disk", 00:17:59.961 "block_size": 512, 00:17:59.961 "num_blocks": 65536, 00:17:59.961 "uuid": "26110b94-48bd-11ef-a06c-59ddad71024c", 00:17:59.961 "assigned_rate_limits": { 00:17:59.961 "rw_ios_per_sec": 0, 00:17:59.961 "rw_mbytes_per_sec": 0, 00:17:59.961 "r_mbytes_per_sec": 0, 00:17:59.961 "w_mbytes_per_sec": 0 00:17:59.961 }, 00:17:59.961 "claimed": true, 00:17:59.961 "claim_type": "exclusive_write", 00:17:59.961 "zoned": false, 00:17:59.961 "supported_io_types": { 00:17:59.961 "read": true, 00:17:59.961 "write": true, 00:17:59.961 "unmap": true, 00:17:59.961 "flush": true, 00:17:59.961 "reset": true, 00:17:59.961 "nvme_admin": false, 00:17:59.962 "nvme_io": false, 00:17:59.962 "nvme_io_md": false, 00:17:59.962 "write_zeroes": true, 00:17:59.962 "zcopy": true, 00:17:59.962 "get_zone_info": false, 00:17:59.962 "zone_management": false, 00:17:59.962 "zone_append": false, 00:17:59.962 "compare": false, 00:17:59.962 "compare_and_write": false, 00:17:59.962 "abort": true, 00:17:59.962 "seek_hole": false, 00:17:59.962 "seek_data": false, 00:17:59.962 "copy": true, 00:17:59.962 "nvme_iov_md": false 00:17:59.962 }, 00:17:59.962 "memory_domains": [ 00:17:59.962 { 00:17:59.962 "dma_device_id": "system", 00:17:59.962 "dma_device_type": 1 00:17:59.962 }, 00:17:59.962 { 00:17:59.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.962 "dma_device_type": 2 00:17:59.962 } 00:17:59.962 ], 00:17:59.962 "driver_specific": {} 00:17:59.962 }' 00:17:59.962 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.962 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.962 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:59.962 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.962 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.962 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:59.962 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.962 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.962 06:31:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:59.962 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.962 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.962 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:59.962 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:00.220 [2024-07-23 06:31:12.636584] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.220 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.478 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.478 "name": "Existed_Raid", 00:18:00.478 "uuid": "26111201-48bd-11ef-a06c-59ddad71024c", 00:18:00.478 "strip_size_kb": 0, 00:18:00.478 "state": "online", 00:18:00.478 "raid_level": "raid1", 00:18:00.478 "superblock": false, 00:18:00.478 "num_base_bdevs": 4, 00:18:00.478 "num_base_bdevs_discovered": 3, 00:18:00.478 "num_base_bdevs_operational": 3, 00:18:00.478 "base_bdevs_list": [ 00:18:00.478 { 00:18:00.478 "name": null, 00:18:00.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.478 "is_configured": false, 00:18:00.478 "data_offset": 0, 00:18:00.478 "data_size": 65536 00:18:00.478 }, 00:18:00.478 { 00:18:00.478 "name": "BaseBdev2", 00:18:00.478 "uuid": "24683af6-48bd-11ef-a06c-59ddad71024c", 00:18:00.478 "is_configured": true, 00:18:00.478 "data_offset": 0, 00:18:00.478 "data_size": 65536 
00:18:00.478 }, 00:18:00.478 { 00:18:00.478 "name": "BaseBdev3", 00:18:00.478 "uuid": "253550a3-48bd-11ef-a06c-59ddad71024c", 00:18:00.478 "is_configured": true, 00:18:00.478 "data_offset": 0, 00:18:00.478 "data_size": 65536 00:18:00.478 }, 00:18:00.478 { 00:18:00.478 "name": "BaseBdev4", 00:18:00.478 "uuid": "26110b94-48bd-11ef-a06c-59ddad71024c", 00:18:00.478 "is_configured": true, 00:18:00.478 "data_offset": 0, 00:18:00.478 "data_size": 65536 00:18:00.478 } 00:18:00.478 ] 00:18:00.478 }' 00:18:00.478 06:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.478 06:31:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.044 06:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:01.044 06:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:01.044 06:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.044 06:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:01.302 06:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:01.302 06:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.302 06:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:01.560 [2024-07-23 06:31:13.835072] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:01.560 06:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:01.560 06:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:01.560 06:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.560 06:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:01.818 06:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:01.818 06:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.818 06:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:02.076 [2024-07-23 06:31:14.429159] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:02.076 06:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:02.076 06:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:02.076 06:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.076 06:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:02.334 06:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:02.334 06:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:02.334 06:31:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:02.592 [2024-07-23 06:31:14.999544] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:02.592 [2024-07-23 06:31:14.999599] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.592 [2024-07-23 06:31:15.005868] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.592 [2024-07-23 06:31:15.005887] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.592 [2024-07-23 06:31:15.005907] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ca660234a00 name Existed_Raid, state offline 00:18:02.592 06:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:02.592 06:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:02.592 06:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.592 06:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:02.849 06:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:02.849 06:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:02.849 06:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:18:02.849 06:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:02.849 06:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:02.849 06:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:03.107 BaseBdev2 00:18:03.107 06:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:03.107 06:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:03.107 06:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:03.107 06:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:03.107 06:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:03.107 06:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:03.107 06:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:03.366 06:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:03.624 [ 00:18:03.624 { 00:18:03.624 "name": "BaseBdev2", 00:18:03.624 "aliases": [ 00:18:03.624 "2994ec17-48bd-11ef-a06c-59ddad71024c" 00:18:03.624 ], 00:18:03.624 "product_name": "Malloc disk", 00:18:03.624 "block_size": 512, 00:18:03.624 "num_blocks": 65536, 00:18:03.624 "uuid": "2994ec17-48bd-11ef-a06c-59ddad71024c", 00:18:03.624 "assigned_rate_limits": { 00:18:03.624 "rw_ios_per_sec": 0, 00:18:03.624 "rw_mbytes_per_sec": 0, 00:18:03.624 
"r_mbytes_per_sec": 0, 00:18:03.624 "w_mbytes_per_sec": 0 00:18:03.624 }, 00:18:03.624 "claimed": false, 00:18:03.624 "zoned": false, 00:18:03.624 "supported_io_types": { 00:18:03.624 "read": true, 00:18:03.624 "write": true, 00:18:03.624 "unmap": true, 00:18:03.624 "flush": true, 00:18:03.624 "reset": true, 00:18:03.624 "nvme_admin": false, 00:18:03.624 "nvme_io": false, 00:18:03.624 "nvme_io_md": false, 00:18:03.624 "write_zeroes": true, 00:18:03.624 "zcopy": true, 00:18:03.624 "get_zone_info": false, 00:18:03.624 "zone_management": false, 00:18:03.624 "zone_append": false, 00:18:03.624 "compare": false, 00:18:03.624 "compare_and_write": false, 00:18:03.624 "abort": true, 00:18:03.624 "seek_hole": false, 00:18:03.624 "seek_data": false, 00:18:03.624 "copy": true, 00:18:03.624 "nvme_iov_md": false 00:18:03.624 }, 00:18:03.624 "memory_domains": [ 00:18:03.624 { 00:18:03.624 "dma_device_id": "system", 00:18:03.624 "dma_device_type": 1 00:18:03.624 }, 00:18:03.624 { 00:18:03.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.624 "dma_device_type": 2 00:18:03.624 } 00:18:03.624 ], 00:18:03.624 "driver_specific": {} 00:18:03.624 } 00:18:03.624 ] 00:18:03.624 06:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:03.624 06:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:03.624 06:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:03.624 06:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:03.883 BaseBdev3 00:18:03.883 06:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:03.883 06:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:03.883 06:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:03.883 06:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:03.883 06:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:03.883 06:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:03.883 06:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.142 06:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:04.400 [ 00:18:04.400 { 00:18:04.400 "name": "BaseBdev3", 00:18:04.400 "aliases": [ 00:18:04.400 "29fe8615-48bd-11ef-a06c-59ddad71024c" 00:18:04.400 ], 00:18:04.400 "product_name": "Malloc disk", 00:18:04.400 "block_size": 512, 00:18:04.400 "num_blocks": 65536, 00:18:04.400 "uuid": "29fe8615-48bd-11ef-a06c-59ddad71024c", 00:18:04.400 "assigned_rate_limits": { 00:18:04.400 "rw_ios_per_sec": 0, 00:18:04.400 "rw_mbytes_per_sec": 0, 00:18:04.400 "r_mbytes_per_sec": 0, 00:18:04.400 "w_mbytes_per_sec": 0 00:18:04.400 }, 00:18:04.400 "claimed": false, 00:18:04.400 "zoned": false, 00:18:04.400 "supported_io_types": { 00:18:04.400 "read": true, 00:18:04.400 "write": true, 00:18:04.400 "unmap": true, 00:18:04.400 "flush": true, 00:18:04.400 "reset": true, 00:18:04.400 "nvme_admin": false, 
00:18:04.400 "nvme_io": false, 00:18:04.400 "nvme_io_md": false, 00:18:04.400 "write_zeroes": true, 00:18:04.400 "zcopy": true, 00:18:04.400 "get_zone_info": false, 00:18:04.400 "zone_management": false, 00:18:04.400 "zone_append": false, 00:18:04.400 "compare": false, 00:18:04.400 "compare_and_write": false, 00:18:04.400 "abort": true, 00:18:04.400 "seek_hole": false, 00:18:04.400 "seek_data": false, 00:18:04.400 "copy": true, 00:18:04.400 "nvme_iov_md": false 00:18:04.400 }, 00:18:04.400 "memory_domains": [ 00:18:04.400 { 00:18:04.400 "dma_device_id": "system", 00:18:04.400 "dma_device_type": 1 00:18:04.400 }, 00:18:04.400 { 00:18:04.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.400 "dma_device_type": 2 00:18:04.400 } 00:18:04.400 ], 00:18:04.400 "driver_specific": {} 00:18:04.400 } 00:18:04.400 ] 00:18:04.400 06:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:04.400 06:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:04.400 06:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:04.400 06:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:04.659 BaseBdev4 00:18:04.659 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:18:04.659 06:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:04.659 06:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:04.659 06:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:04.659 06:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:04.659 06:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:04.659 06:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.918 06:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:05.176 [ 00:18:05.176 { 00:18:05.176 "name": "BaseBdev4", 00:18:05.176 "aliases": [ 00:18:05.176 "2a7d7cbf-48bd-11ef-a06c-59ddad71024c" 00:18:05.176 ], 00:18:05.176 "product_name": "Malloc disk", 00:18:05.176 "block_size": 512, 00:18:05.176 "num_blocks": 65536, 00:18:05.176 "uuid": "2a7d7cbf-48bd-11ef-a06c-59ddad71024c", 00:18:05.176 "assigned_rate_limits": { 00:18:05.176 "rw_ios_per_sec": 0, 00:18:05.176 "rw_mbytes_per_sec": 0, 00:18:05.176 "r_mbytes_per_sec": 0, 00:18:05.176 "w_mbytes_per_sec": 0 00:18:05.176 }, 00:18:05.176 "claimed": false, 00:18:05.176 "zoned": false, 00:18:05.176 "supported_io_types": { 00:18:05.176 "read": true, 00:18:05.176 "write": true, 00:18:05.176 "unmap": true, 00:18:05.176 "flush": true, 00:18:05.176 "reset": true, 00:18:05.176 "nvme_admin": false, 00:18:05.176 "nvme_io": false, 00:18:05.176 "nvme_io_md": false, 00:18:05.176 "write_zeroes": true, 00:18:05.176 "zcopy": true, 00:18:05.176 "get_zone_info": false, 00:18:05.176 "zone_management": false, 00:18:05.176 "zone_append": false, 00:18:05.176 "compare": false, 00:18:05.176 "compare_and_write": false, 00:18:05.176 "abort": true, 
00:18:05.176 "seek_hole": false, 00:18:05.176 "seek_data": false, 00:18:05.176 "copy": true, 00:18:05.176 "nvme_iov_md": false 00:18:05.176 }, 00:18:05.176 "memory_domains": [ 00:18:05.176 { 00:18:05.176 "dma_device_id": "system", 00:18:05.176 "dma_device_type": 1 00:18:05.176 }, 00:18:05.176 { 00:18:05.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.176 "dma_device_type": 2 00:18:05.176 } 00:18:05.176 ], 00:18:05.176 "driver_specific": {} 00:18:05.176 } 00:18:05.176 ] 00:18:05.176 06:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:05.176 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:05.176 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:05.176 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:05.435 [2024-07-23 06:31:17.902124] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:05.435 [2024-07-23 06:31:17.902176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:05.435 [2024-07-23 06:31:17.902186] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:05.435 [2024-07-23 06:31:17.902759] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:05.435 [2024-07-23 06:31:17.902780] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:05.435 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:05.435 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:05.435 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:05.435 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:05.435 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:05.435 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:05.435 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.435 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.435 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.435 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.435 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.435 06:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.693 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:05.693 "name": "Existed_Raid", 00:18:05.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.693 "strip_size_kb": 0, 00:18:05.693 "state": "configuring", 00:18:05.693 "raid_level": "raid1", 00:18:05.693 "superblock": false, 00:18:05.693 "num_base_bdevs": 4, 00:18:05.693 
"num_base_bdevs_discovered": 3, 00:18:05.693 "num_base_bdevs_operational": 4, 00:18:05.693 "base_bdevs_list": [ 00:18:05.693 { 00:18:05.693 "name": "BaseBdev1", 00:18:05.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.693 "is_configured": false, 00:18:05.693 "data_offset": 0, 00:18:05.693 "data_size": 0 00:18:05.693 }, 00:18:05.693 { 00:18:05.693 "name": "BaseBdev2", 00:18:05.693 "uuid": "2994ec17-48bd-11ef-a06c-59ddad71024c", 00:18:05.693 "is_configured": true, 00:18:05.693 "data_offset": 0, 00:18:05.693 "data_size": 65536 00:18:05.693 }, 00:18:05.693 { 00:18:05.693 "name": "BaseBdev3", 00:18:05.693 "uuid": "29fe8615-48bd-11ef-a06c-59ddad71024c", 00:18:05.693 "is_configured": true, 00:18:05.693 "data_offset": 0, 00:18:05.693 "data_size": 65536 00:18:05.693 }, 00:18:05.693 { 00:18:05.693 "name": "BaseBdev4", 00:18:05.693 "uuid": "2a7d7cbf-48bd-11ef-a06c-59ddad71024c", 00:18:05.693 "is_configured": true, 00:18:05.693 "data_offset": 0, 00:18:05.693 "data_size": 65536 00:18:05.693 } 00:18:05.693 ] 00:18:05.693 }' 00:18:05.693 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:05.693 06:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.258 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:06.258 [2024-07-23 06:31:18.762186] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:06.258 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:06.258 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:06.258 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:06.258 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:06.258 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:06.258 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:06.258 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:06.258 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:06.258 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:06.258 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:06.258 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.258 06:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.824 06:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.824 "name": "Existed_Raid", 00:18:06.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.824 "strip_size_kb": 0, 00:18:06.824 "state": "configuring", 00:18:06.824 "raid_level": "raid1", 00:18:06.824 "superblock": false, 00:18:06.824 "num_base_bdevs": 4, 00:18:06.824 "num_base_bdevs_discovered": 2, 00:18:06.824 "num_base_bdevs_operational": 4, 00:18:06.824 "base_bdevs_list": [ 00:18:06.824 { 00:18:06.824 "name": 
"BaseBdev1", 00:18:06.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.824 "is_configured": false, 00:18:06.824 "data_offset": 0, 00:18:06.824 "data_size": 0 00:18:06.824 }, 00:18:06.824 { 00:18:06.824 "name": null, 00:18:06.824 "uuid": "2994ec17-48bd-11ef-a06c-59ddad71024c", 00:18:06.824 "is_configured": false, 00:18:06.824 "data_offset": 0, 00:18:06.824 "data_size": 65536 00:18:06.824 }, 00:18:06.824 { 00:18:06.824 "name": "BaseBdev3", 00:18:06.824 "uuid": "29fe8615-48bd-11ef-a06c-59ddad71024c", 00:18:06.824 "is_configured": true, 00:18:06.824 "data_offset": 0, 00:18:06.824 "data_size": 65536 00:18:06.824 }, 00:18:06.824 { 00:18:06.824 "name": "BaseBdev4", 00:18:06.824 "uuid": "2a7d7cbf-48bd-11ef-a06c-59ddad71024c", 00:18:06.824 "is_configured": true, 00:18:06.824 "data_offset": 0, 00:18:06.824 "data_size": 65536 00:18:06.824 } 00:18:06.824 ] 00:18:06.824 }' 00:18:06.824 06:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.824 06:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.082 06:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.082 06:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:07.340 06:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:07.340 06:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:07.599 [2024-07-23 06:31:19.898447] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.599 BaseBdev1 00:18:07.599 06:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:07.599 06:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:07.599 06:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:07.599 06:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:07.599 06:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:07.599 06:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:07.599 06:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:07.857 06:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:08.114 [ 00:18:08.114 { 00:18:08.114 "name": "BaseBdev1", 00:18:08.114 "aliases": [ 00:18:08.114 "2c2da84b-48bd-11ef-a06c-59ddad71024c" 00:18:08.114 ], 00:18:08.114 "product_name": "Malloc disk", 00:18:08.114 "block_size": 512, 00:18:08.114 "num_blocks": 65536, 00:18:08.114 "uuid": "2c2da84b-48bd-11ef-a06c-59ddad71024c", 00:18:08.114 "assigned_rate_limits": { 00:18:08.114 "rw_ios_per_sec": 0, 00:18:08.114 "rw_mbytes_per_sec": 0, 00:18:08.114 "r_mbytes_per_sec": 0, 00:18:08.114 "w_mbytes_per_sec": 0 00:18:08.114 }, 00:18:08.114 "claimed": true, 00:18:08.114 "claim_type": "exclusive_write", 00:18:08.114 "zoned": false, 
00:18:08.114 "supported_io_types": { 00:18:08.114 "read": true, 00:18:08.114 "write": true, 00:18:08.114 "unmap": true, 00:18:08.114 "flush": true, 00:18:08.114 "reset": true, 00:18:08.114 "nvme_admin": false, 00:18:08.114 "nvme_io": false, 00:18:08.114 "nvme_io_md": false, 00:18:08.114 "write_zeroes": true, 00:18:08.114 "zcopy": true, 00:18:08.114 "get_zone_info": false, 00:18:08.114 "zone_management": false, 00:18:08.114 "zone_append": false, 00:18:08.114 "compare": false, 00:18:08.114 "compare_and_write": false, 00:18:08.114 "abort": true, 00:18:08.114 "seek_hole": false, 00:18:08.114 "seek_data": false, 00:18:08.114 "copy": true, 00:18:08.114 "nvme_iov_md": false 00:18:08.114 }, 00:18:08.114 "memory_domains": [ 00:18:08.114 { 00:18:08.114 "dma_device_id": "system", 00:18:08.114 "dma_device_type": 1 00:18:08.114 }, 00:18:08.114 { 00:18:08.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.114 "dma_device_type": 2 00:18:08.114 } 00:18:08.114 ], 00:18:08.114 "driver_specific": {} 00:18:08.114 } 00:18:08.114 ] 00:18:08.114 06:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:08.114 06:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:08.114 06:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:08.114 06:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:08.114 06:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:08.114 06:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:08.114 06:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:08.114 06:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.114 06:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.114 06:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.114 06:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.114 06:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.114 06:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.372 06:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.372 "name": "Existed_Raid", 00:18:08.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.372 "strip_size_kb": 0, 00:18:08.372 "state": "configuring", 00:18:08.372 "raid_level": "raid1", 00:18:08.372 "superblock": false, 00:18:08.372 "num_base_bdevs": 4, 00:18:08.372 "num_base_bdevs_discovered": 3, 00:18:08.372 "num_base_bdevs_operational": 4, 00:18:08.372 "base_bdevs_list": [ 00:18:08.372 { 00:18:08.372 "name": "BaseBdev1", 00:18:08.372 "uuid": "2c2da84b-48bd-11ef-a06c-59ddad71024c", 00:18:08.372 "is_configured": true, 00:18:08.372 "data_offset": 0, 00:18:08.372 "data_size": 65536 00:18:08.372 }, 00:18:08.372 { 00:18:08.372 "name": null, 00:18:08.372 "uuid": "2994ec17-48bd-11ef-a06c-59ddad71024c", 00:18:08.372 "is_configured": false, 00:18:08.372 "data_offset": 0, 00:18:08.372 "data_size": 65536 00:18:08.372 }, 
00:18:08.372 { 00:18:08.372 "name": "BaseBdev3", 00:18:08.372 "uuid": "29fe8615-48bd-11ef-a06c-59ddad71024c", 00:18:08.372 "is_configured": true, 00:18:08.372 "data_offset": 0, 00:18:08.372 "data_size": 65536 00:18:08.372 }, 00:18:08.372 { 00:18:08.372 "name": "BaseBdev4", 00:18:08.372 "uuid": "2a7d7cbf-48bd-11ef-a06c-59ddad71024c", 00:18:08.372 "is_configured": true, 00:18:08.372 "data_offset": 0, 00:18:08.372 "data_size": 65536 00:18:08.372 } 00:18:08.372 ] 00:18:08.372 }' 00:18:08.372 06:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.372 06:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.630 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.630 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:08.888 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:08.888 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:09.146 [2024-07-23 06:31:21.578342] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:09.146 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:09.146 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:09.146 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:09.146 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:09.146 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:09.146 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:09.146 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.146 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.146 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.146 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.146 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.146 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.404 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.404 "name": "Existed_Raid", 00:18:09.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.404 "strip_size_kb": 0, 00:18:09.404 "state": "configuring", 00:18:09.404 "raid_level": "raid1", 00:18:09.404 "superblock": false, 00:18:09.404 "num_base_bdevs": 4, 00:18:09.404 "num_base_bdevs_discovered": 2, 00:18:09.404 "num_base_bdevs_operational": 4, 00:18:09.404 "base_bdevs_list": [ 00:18:09.404 { 00:18:09.404 "name": "BaseBdev1", 00:18:09.404 "uuid": "2c2da84b-48bd-11ef-a06c-59ddad71024c", 00:18:09.404 "is_configured": true, 00:18:09.404 "data_offset": 
0, 00:18:09.404 "data_size": 65536 00:18:09.404 }, 00:18:09.404 { 00:18:09.404 "name": null, 00:18:09.404 "uuid": "2994ec17-48bd-11ef-a06c-59ddad71024c", 00:18:09.404 "is_configured": false, 00:18:09.404 "data_offset": 0, 00:18:09.404 "data_size": 65536 00:18:09.404 }, 00:18:09.404 { 00:18:09.404 "name": null, 00:18:09.404 "uuid": "29fe8615-48bd-11ef-a06c-59ddad71024c", 00:18:09.404 "is_configured": false, 00:18:09.404 "data_offset": 0, 00:18:09.404 "data_size": 65536 00:18:09.404 }, 00:18:09.404 { 00:18:09.404 "name": "BaseBdev4", 00:18:09.404 "uuid": "2a7d7cbf-48bd-11ef-a06c-59ddad71024c", 00:18:09.404 "is_configured": true, 00:18:09.404 "data_offset": 0, 00:18:09.404 "data_size": 65536 00:18:09.404 } 00:18:09.404 ] 00:18:09.404 }' 00:18:09.404 06:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.404 06:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.970 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.970 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:10.229 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:10.229 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:10.488 [2024-07-23 06:31:22.758379] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:10.488 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:10.488 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:10.488 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:10.488 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:10.488 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:10.488 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:10.488 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.488 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:10.488 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.488 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.488 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.488 06:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.746 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.746 "name": "Existed_Raid", 00:18:10.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.746 "strip_size_kb": 0, 00:18:10.746 "state": "configuring", 00:18:10.746 "raid_level": "raid1", 00:18:10.746 "superblock": false, 00:18:10.746 "num_base_bdevs": 4, 
00:18:10.746 "num_base_bdevs_discovered": 3, 00:18:10.746 "num_base_bdevs_operational": 4, 00:18:10.746 "base_bdevs_list": [ 00:18:10.746 { 00:18:10.746 "name": "BaseBdev1", 00:18:10.746 "uuid": "2c2da84b-48bd-11ef-a06c-59ddad71024c", 00:18:10.746 "is_configured": true, 00:18:10.746 "data_offset": 0, 00:18:10.746 "data_size": 65536 00:18:10.746 }, 00:18:10.746 { 00:18:10.746 "name": null, 00:18:10.746 "uuid": "2994ec17-48bd-11ef-a06c-59ddad71024c", 00:18:10.746 "is_configured": false, 00:18:10.746 "data_offset": 0, 00:18:10.746 "data_size": 65536 00:18:10.746 }, 00:18:10.746 { 00:18:10.746 "name": "BaseBdev3", 00:18:10.746 "uuid": "29fe8615-48bd-11ef-a06c-59ddad71024c", 00:18:10.746 "is_configured": true, 00:18:10.746 "data_offset": 0, 00:18:10.746 "data_size": 65536 00:18:10.746 }, 00:18:10.746 { 00:18:10.746 "name": "BaseBdev4", 00:18:10.746 "uuid": "2a7d7cbf-48bd-11ef-a06c-59ddad71024c", 00:18:10.746 "is_configured": true, 00:18:10.746 "data_offset": 0, 00:18:10.746 "data_size": 65536 00:18:10.746 } 00:18:10.746 ] 00:18:10.746 }' 00:18:10.746 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.746 06:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.004 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.004 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:11.262 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:11.262 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:11.535 [2024-07-23 06:31:23.938449] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:11.535 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:11.535 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:11.535 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:11.535 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:11.535 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:11.535 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:11.535 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.535 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.535 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.535 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.535 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.535 06:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.794 06:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:18:11.794 "name": "Existed_Raid", 00:18:11.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.794 "strip_size_kb": 0, 00:18:11.794 "state": "configuring", 00:18:11.794 "raid_level": "raid1", 00:18:11.794 "superblock": false, 00:18:11.794 "num_base_bdevs": 4, 00:18:11.794 "num_base_bdevs_discovered": 2, 00:18:11.794 "num_base_bdevs_operational": 4, 00:18:11.794 "base_bdevs_list": [ 00:18:11.794 { 00:18:11.794 "name": null, 00:18:11.794 "uuid": "2c2da84b-48bd-11ef-a06c-59ddad71024c", 00:18:11.794 "is_configured": false, 00:18:11.794 "data_offset": 0, 00:18:11.794 "data_size": 65536 00:18:11.794 }, 00:18:11.794 { 00:18:11.794 "name": null, 00:18:11.794 "uuid": "2994ec17-48bd-11ef-a06c-59ddad71024c", 00:18:11.794 "is_configured": false, 00:18:11.794 "data_offset": 0, 00:18:11.794 "data_size": 65536 00:18:11.794 }, 00:18:11.794 { 00:18:11.794 "name": "BaseBdev3", 00:18:11.794 "uuid": "29fe8615-48bd-11ef-a06c-59ddad71024c", 00:18:11.794 "is_configured": true, 00:18:11.794 "data_offset": 0, 00:18:11.794 "data_size": 65536 00:18:11.794 }, 00:18:11.794 { 00:18:11.794 "name": "BaseBdev4", 00:18:11.794 "uuid": "2a7d7cbf-48bd-11ef-a06c-59ddad71024c", 00:18:11.794 "is_configured": true, 00:18:11.794 "data_offset": 0, 00:18:11.794 "data_size": 65536 00:18:11.794 } 00:18:11.794 ] 00:18:11.794 }' 00:18:11.794 06:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.794 06:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.053 06:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:12.053 06:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.619 06:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:12.619 06:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:12.878 [2024-07-23 06:31:25.144435] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:12.878 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:12.878 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:12.878 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:12.878 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:12.878 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:12.878 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:12.878 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.878 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.878 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.878 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.878 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.878 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.136 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.136 "name": "Existed_Raid", 00:18:13.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.136 "strip_size_kb": 0, 00:18:13.136 "state": "configuring", 00:18:13.136 "raid_level": "raid1", 00:18:13.136 "superblock": false, 00:18:13.136 "num_base_bdevs": 4, 00:18:13.136 "num_base_bdevs_discovered": 3, 00:18:13.136 "num_base_bdevs_operational": 4, 00:18:13.136 "base_bdevs_list": [ 00:18:13.136 { 00:18:13.136 "name": null, 00:18:13.136 "uuid": "2c2da84b-48bd-11ef-a06c-59ddad71024c", 00:18:13.136 "is_configured": false, 00:18:13.136 "data_offset": 0, 00:18:13.136 "data_size": 65536 00:18:13.136 }, 00:18:13.136 { 00:18:13.136 "name": "BaseBdev2", 00:18:13.136 "uuid": "2994ec17-48bd-11ef-a06c-59ddad71024c", 00:18:13.136 "is_configured": true, 00:18:13.136 "data_offset": 0, 00:18:13.136 "data_size": 65536 00:18:13.136 }, 00:18:13.136 { 00:18:13.136 "name": "BaseBdev3", 00:18:13.136 "uuid": "29fe8615-48bd-11ef-a06c-59ddad71024c", 00:18:13.136 "is_configured": true, 00:18:13.136 "data_offset": 0, 00:18:13.136 "data_size": 65536 00:18:13.136 }, 00:18:13.136 { 00:18:13.136 "name": "BaseBdev4", 00:18:13.136 "uuid": "2a7d7cbf-48bd-11ef-a06c-59ddad71024c", 00:18:13.136 "is_configured": true, 00:18:13.136 "data_offset": 0, 00:18:13.136 "data_size": 65536 00:18:13.136 } 00:18:13.136 ] 00:18:13.136 }' 00:18:13.136 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:13.136 06:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.393 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:13.393 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.650 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:13.651 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.651 06:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:13.909 06:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2c2da84b-48bd-11ef-a06c-59ddad71024c 00:18:14.167 [2024-07-23 06:31:26.456623] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:14.167 [2024-07-23 06:31:26.456656] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3ca660234f00 00:18:14.167 [2024-07-23 06:31:26.456661] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:14.167 [2024-07-23 06:31:26.456686] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3ca660297e20 00:18:14.167 [2024-07-23 06:31:26.456762] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3ca660234f00 00:18:14.167 [2024-07-23 06:31:26.456767] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name Existed_Raid, raid_bdev 0x3ca660234f00 00:18:14.167 [2024-07-23 06:31:26.456802] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.167 NewBaseBdev 00:18:14.167 06:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:14.167 06:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:18:14.167 06:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:14.167 06:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:14.167 06:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:14.167 06:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:14.167 06:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:14.425 06:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:14.425 [ 00:18:14.425 { 00:18:14.425 "name": "NewBaseBdev", 00:18:14.425 "aliases": [ 00:18:14.425 "2c2da84b-48bd-11ef-a06c-59ddad71024c" 00:18:14.425 ], 00:18:14.425 "product_name": "Malloc disk", 00:18:14.425 "block_size": 512, 00:18:14.425 "num_blocks": 65536, 00:18:14.425 "uuid": "2c2da84b-48bd-11ef-a06c-59ddad71024c", 00:18:14.425 "assigned_rate_limits": { 00:18:14.425 "rw_ios_per_sec": 0, 00:18:14.425 "rw_mbytes_per_sec": 0, 00:18:14.425 "r_mbytes_per_sec": 0, 00:18:14.425 "w_mbytes_per_sec": 0 00:18:14.425 }, 00:18:14.425 "claimed": true, 00:18:14.425 "claim_type": "exclusive_write", 00:18:14.425 "zoned": false, 00:18:14.425 "supported_io_types": { 00:18:14.425 "read": true, 00:18:14.425 "write": true, 00:18:14.425 "unmap": true, 00:18:14.425 "flush": true, 00:18:14.425 "reset": true, 00:18:14.425 "nvme_admin": false, 00:18:14.425 "nvme_io": false, 00:18:14.425 "nvme_io_md": false, 00:18:14.425 "write_zeroes": true, 00:18:14.425 "zcopy": true, 00:18:14.425 "get_zone_info": false, 00:18:14.425 "zone_management": false, 00:18:14.425 "zone_append": false, 00:18:14.425 "compare": false, 00:18:14.425 "compare_and_write": false, 00:18:14.425 "abort": true, 00:18:14.425 "seek_hole": false, 00:18:14.425 "seek_data": false, 00:18:14.425 "copy": true, 00:18:14.425 "nvme_iov_md": false 00:18:14.425 }, 00:18:14.425 "memory_domains": [ 00:18:14.425 { 00:18:14.425 "dma_device_id": "system", 00:18:14.425 "dma_device_type": 1 00:18:14.425 }, 00:18:14.425 { 00:18:14.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.425 "dma_device_type": 2 00:18:14.425 } 00:18:14.425 ], 00:18:14.425 "driver_specific": {} 00:18:14.425 } 00:18:14.425 ] 00:18:14.725 06:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:14.725 06:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:14.725 06:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:14.725 06:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:14.725 06:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:14.725 06:31:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:14.725 06:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:14.725 06:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:14.725 06:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:14.725 06:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:14.725 06:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:14.725 06:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.725 06:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.725 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.725 "name": "Existed_Raid", 00:18:14.725 "uuid": "3016619a-48bd-11ef-a06c-59ddad71024c", 00:18:14.725 "strip_size_kb": 0, 00:18:14.725 "state": "online", 00:18:14.725 "raid_level": "raid1", 00:18:14.725 "superblock": false, 00:18:14.725 "num_base_bdevs": 4, 00:18:14.725 "num_base_bdevs_discovered": 4, 00:18:14.725 "num_base_bdevs_operational": 4, 00:18:14.725 "base_bdevs_list": [ 00:18:14.725 { 00:18:14.725 "name": "NewBaseBdev", 00:18:14.725 "uuid": "2c2da84b-48bd-11ef-a06c-59ddad71024c", 00:18:14.725 "is_configured": true, 00:18:14.725 "data_offset": 0, 00:18:14.725 "data_size": 65536 00:18:14.725 }, 00:18:14.725 { 00:18:14.725 "name": "BaseBdev2", 00:18:14.725 "uuid": "2994ec17-48bd-11ef-a06c-59ddad71024c", 00:18:14.725 "is_configured": true, 00:18:14.725 "data_offset": 0, 00:18:14.725 "data_size": 65536 00:18:14.725 }, 00:18:14.725 { 00:18:14.725 "name": "BaseBdev3", 00:18:14.725 "uuid": "29fe8615-48bd-11ef-a06c-59ddad71024c", 00:18:14.725 "is_configured": true, 00:18:14.725 "data_offset": 0, 00:18:14.725 "data_size": 65536 00:18:14.725 }, 00:18:14.725 { 00:18:14.725 "name": "BaseBdev4", 00:18:14.725 "uuid": "2a7d7cbf-48bd-11ef-a06c-59ddad71024c", 00:18:14.725 "is_configured": true, 00:18:14.725 "data_offset": 0, 00:18:14.725 "data_size": 65536 00:18:14.725 } 00:18:14.725 ] 00:18:14.725 }' 00:18:14.725 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.725 06:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.983 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:14.983 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:14.983 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:14.983 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:14.983 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:14.983 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:14.983 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:14.983 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:15.241 
[2024-07-23 06:31:27.720551] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.241 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:15.241 "name": "Existed_Raid", 00:18:15.241 "aliases": [ 00:18:15.241 "3016619a-48bd-11ef-a06c-59ddad71024c" 00:18:15.241 ], 00:18:15.241 "product_name": "Raid Volume", 00:18:15.241 "block_size": 512, 00:18:15.241 "num_blocks": 65536, 00:18:15.241 "uuid": "3016619a-48bd-11ef-a06c-59ddad71024c", 00:18:15.241 "assigned_rate_limits": { 00:18:15.241 "rw_ios_per_sec": 0, 00:18:15.241 "rw_mbytes_per_sec": 0, 00:18:15.241 "r_mbytes_per_sec": 0, 00:18:15.241 "w_mbytes_per_sec": 0 00:18:15.241 }, 00:18:15.241 "claimed": false, 00:18:15.241 "zoned": false, 00:18:15.241 "supported_io_types": { 00:18:15.241 "read": true, 00:18:15.241 "write": true, 00:18:15.241 "unmap": false, 00:18:15.241 "flush": false, 00:18:15.241 "reset": true, 00:18:15.241 "nvme_admin": false, 00:18:15.241 "nvme_io": false, 00:18:15.241 "nvme_io_md": false, 00:18:15.241 "write_zeroes": true, 00:18:15.241 "zcopy": false, 00:18:15.241 "get_zone_info": false, 00:18:15.241 "zone_management": false, 00:18:15.241 "zone_append": false, 00:18:15.241 "compare": false, 00:18:15.241 "compare_and_write": false, 00:18:15.241 "abort": false, 00:18:15.241 "seek_hole": false, 00:18:15.241 "seek_data": false, 00:18:15.241 "copy": false, 00:18:15.241 "nvme_iov_md": false 00:18:15.241 }, 00:18:15.241 "memory_domains": [ 00:18:15.241 { 00:18:15.241 "dma_device_id": "system", 00:18:15.241 "dma_device_type": 1 00:18:15.241 }, 00:18:15.241 { 00:18:15.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.241 "dma_device_type": 2 00:18:15.241 }, 00:18:15.241 { 00:18:15.241 "dma_device_id": "system", 00:18:15.241 "dma_device_type": 1 00:18:15.241 }, 00:18:15.241 { 00:18:15.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.241 "dma_device_type": 2 00:18:15.241 }, 00:18:15.241 { 00:18:15.241 "dma_device_id": "system", 00:18:15.241 "dma_device_type": 1 00:18:15.241 }, 00:18:15.241 { 00:18:15.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.241 "dma_device_type": 2 00:18:15.241 }, 00:18:15.241 { 00:18:15.241 "dma_device_id": "system", 00:18:15.241 "dma_device_type": 1 00:18:15.241 }, 00:18:15.241 { 00:18:15.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.241 "dma_device_type": 2 00:18:15.241 } 00:18:15.241 ], 00:18:15.241 "driver_specific": { 00:18:15.241 "raid": { 00:18:15.241 "uuid": "3016619a-48bd-11ef-a06c-59ddad71024c", 00:18:15.241 "strip_size_kb": 0, 00:18:15.241 "state": "online", 00:18:15.241 "raid_level": "raid1", 00:18:15.241 "superblock": false, 00:18:15.241 "num_base_bdevs": 4, 00:18:15.241 "num_base_bdevs_discovered": 4, 00:18:15.241 "num_base_bdevs_operational": 4, 00:18:15.241 "base_bdevs_list": [ 00:18:15.241 { 00:18:15.241 "name": "NewBaseBdev", 00:18:15.241 "uuid": "2c2da84b-48bd-11ef-a06c-59ddad71024c", 00:18:15.241 "is_configured": true, 00:18:15.241 "data_offset": 0, 00:18:15.241 "data_size": 65536 00:18:15.241 }, 00:18:15.241 { 00:18:15.241 "name": "BaseBdev2", 00:18:15.241 "uuid": "2994ec17-48bd-11ef-a06c-59ddad71024c", 00:18:15.241 "is_configured": true, 00:18:15.241 "data_offset": 0, 00:18:15.241 "data_size": 65536 00:18:15.241 }, 00:18:15.241 { 00:18:15.241 "name": "BaseBdev3", 00:18:15.241 "uuid": "29fe8615-48bd-11ef-a06c-59ddad71024c", 00:18:15.241 "is_configured": true, 00:18:15.241 "data_offset": 0, 00:18:15.241 "data_size": 65536 00:18:15.241 }, 00:18:15.241 { 00:18:15.241 "name": "BaseBdev4", 
00:18:15.241 "uuid": "2a7d7cbf-48bd-11ef-a06c-59ddad71024c", 00:18:15.241 "is_configured": true, 00:18:15.241 "data_offset": 0, 00:18:15.241 "data_size": 65536 00:18:15.241 } 00:18:15.241 ] 00:18:15.241 } 00:18:15.241 } 00:18:15.241 }' 00:18:15.241 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:15.241 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:15.241 BaseBdev2 00:18:15.241 BaseBdev3 00:18:15.241 BaseBdev4' 00:18:15.241 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:15.242 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:15.242 06:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:15.807 "name": "NewBaseBdev", 00:18:15.807 "aliases": [ 00:18:15.807 "2c2da84b-48bd-11ef-a06c-59ddad71024c" 00:18:15.807 ], 00:18:15.807 "product_name": "Malloc disk", 00:18:15.807 "block_size": 512, 00:18:15.807 "num_blocks": 65536, 00:18:15.807 "uuid": "2c2da84b-48bd-11ef-a06c-59ddad71024c", 00:18:15.807 "assigned_rate_limits": { 00:18:15.807 "rw_ios_per_sec": 0, 00:18:15.807 "rw_mbytes_per_sec": 0, 00:18:15.807 "r_mbytes_per_sec": 0, 00:18:15.807 "w_mbytes_per_sec": 0 00:18:15.807 }, 00:18:15.807 "claimed": true, 00:18:15.807 "claim_type": "exclusive_write", 00:18:15.807 "zoned": false, 00:18:15.807 "supported_io_types": { 00:18:15.807 "read": true, 00:18:15.807 "write": true, 00:18:15.807 "unmap": true, 00:18:15.807 "flush": true, 00:18:15.807 "reset": true, 00:18:15.807 "nvme_admin": false, 00:18:15.807 "nvme_io": false, 00:18:15.807 "nvme_io_md": false, 00:18:15.807 "write_zeroes": true, 00:18:15.807 "zcopy": true, 00:18:15.807 "get_zone_info": false, 00:18:15.807 "zone_management": false, 00:18:15.807 "zone_append": false, 00:18:15.807 "compare": false, 00:18:15.807 "compare_and_write": false, 00:18:15.807 "abort": true, 00:18:15.807 "seek_hole": false, 00:18:15.807 "seek_data": false, 00:18:15.807 "copy": true, 00:18:15.807 "nvme_iov_md": false 00:18:15.807 }, 00:18:15.807 "memory_domains": [ 00:18:15.807 { 00:18:15.807 "dma_device_id": "system", 00:18:15.807 "dma_device_type": 1 00:18:15.807 }, 00:18:15.807 { 00:18:15.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.807 "dma_device_type": 2 00:18:15.807 } 00:18:15.807 ], 00:18:15.807 "driver_specific": {} 00:18:15.807 }' 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.807 
06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:15.807 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:15.807 "name": "BaseBdev2", 00:18:15.807 "aliases": [ 00:18:15.807 "2994ec17-48bd-11ef-a06c-59ddad71024c" 00:18:15.807 ], 00:18:15.807 "product_name": "Malloc disk", 00:18:15.807 "block_size": 512, 00:18:15.807 "num_blocks": 65536, 00:18:15.807 "uuid": "2994ec17-48bd-11ef-a06c-59ddad71024c", 00:18:15.807 "assigned_rate_limits": { 00:18:15.807 "rw_ios_per_sec": 0, 00:18:15.807 "rw_mbytes_per_sec": 0, 00:18:15.807 "r_mbytes_per_sec": 0, 00:18:15.807 "w_mbytes_per_sec": 0 00:18:15.807 }, 00:18:15.807 "claimed": true, 00:18:15.807 "claim_type": "exclusive_write", 00:18:15.807 "zoned": false, 00:18:15.807 "supported_io_types": { 00:18:15.808 "read": true, 00:18:15.808 "write": true, 00:18:15.808 "unmap": true, 00:18:15.808 "flush": true, 00:18:15.808 "reset": true, 00:18:15.808 "nvme_admin": false, 00:18:15.808 "nvme_io": false, 00:18:15.808 "nvme_io_md": false, 00:18:15.808 "write_zeroes": true, 00:18:15.808 "zcopy": true, 00:18:15.808 "get_zone_info": false, 00:18:15.808 "zone_management": false, 00:18:15.808 "zone_append": false, 00:18:15.808 "compare": false, 00:18:15.808 "compare_and_write": false, 00:18:15.808 "abort": true, 00:18:15.808 "seek_hole": false, 00:18:15.808 "seek_data": false, 00:18:15.808 "copy": true, 00:18:15.808 "nvme_iov_md": false 00:18:15.808 }, 00:18:15.808 "memory_domains": [ 00:18:15.808 { 00:18:15.808 "dma_device_id": "system", 00:18:15.808 "dma_device_type": 1 00:18:15.808 }, 00:18:15.808 { 00:18:15.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.808 "dma_device_type": 2 00:18:15.808 } 00:18:15.808 ], 00:18:15.808 "driver_specific": {} 00:18:15.808 }' 00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:16.066 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:16.324 "name": "BaseBdev3", 00:18:16.324 "aliases": [ 00:18:16.324 "29fe8615-48bd-11ef-a06c-59ddad71024c" 00:18:16.324 ], 00:18:16.324 "product_name": "Malloc disk", 00:18:16.324 "block_size": 512, 00:18:16.324 "num_blocks": 65536, 00:18:16.324 "uuid": "29fe8615-48bd-11ef-a06c-59ddad71024c", 00:18:16.324 "assigned_rate_limits": { 00:18:16.324 "rw_ios_per_sec": 0, 00:18:16.324 "rw_mbytes_per_sec": 0, 00:18:16.324 "r_mbytes_per_sec": 0, 00:18:16.324 "w_mbytes_per_sec": 0 00:18:16.324 }, 00:18:16.324 "claimed": true, 00:18:16.324 "claim_type": "exclusive_write", 00:18:16.324 "zoned": false, 00:18:16.324 "supported_io_types": { 00:18:16.324 "read": true, 00:18:16.324 "write": true, 00:18:16.324 "unmap": true, 00:18:16.324 "flush": true, 00:18:16.324 "reset": true, 00:18:16.324 "nvme_admin": false, 00:18:16.324 "nvme_io": false, 00:18:16.324 "nvme_io_md": false, 00:18:16.324 "write_zeroes": true, 00:18:16.324 "zcopy": true, 00:18:16.324 "get_zone_info": false, 00:18:16.324 "zone_management": false, 00:18:16.324 "zone_append": false, 00:18:16.324 "compare": false, 00:18:16.324 "compare_and_write": false, 00:18:16.324 "abort": true, 00:18:16.324 "seek_hole": false, 00:18:16.324 "seek_data": false, 00:18:16.324 "copy": true, 00:18:16.324 "nvme_iov_md": false 00:18:16.324 }, 00:18:16.324 "memory_domains": [ 00:18:16.324 { 00:18:16.324 "dma_device_id": "system", 00:18:16.324 "dma_device_type": 1 00:18:16.324 }, 00:18:16.324 { 00:18:16.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.324 "dma_device_type": 2 00:18:16.324 } 00:18:16.324 ], 00:18:16.324 "driver_specific": {} 00:18:16.324 }' 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:16.324 06:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:16.582 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:16.582 "name": "BaseBdev4", 00:18:16.582 "aliases": [ 00:18:16.582 "2a7d7cbf-48bd-11ef-a06c-59ddad71024c" 00:18:16.582 ], 00:18:16.582 "product_name": "Malloc disk", 00:18:16.582 "block_size": 512, 00:18:16.582 "num_blocks": 65536, 00:18:16.582 "uuid": "2a7d7cbf-48bd-11ef-a06c-59ddad71024c", 00:18:16.582 "assigned_rate_limits": { 00:18:16.582 "rw_ios_per_sec": 0, 00:18:16.582 "rw_mbytes_per_sec": 0, 00:18:16.582 "r_mbytes_per_sec": 0, 00:18:16.582 "w_mbytes_per_sec": 0 00:18:16.582 }, 00:18:16.582 "claimed": true, 00:18:16.582 "claim_type": "exclusive_write", 00:18:16.582 "zoned": false, 00:18:16.582 "supported_io_types": { 00:18:16.582 "read": true, 00:18:16.582 "write": true, 00:18:16.582 "unmap": true, 00:18:16.582 "flush": true, 00:18:16.582 "reset": true, 00:18:16.582 "nvme_admin": false, 00:18:16.582 "nvme_io": false, 00:18:16.582 "nvme_io_md": false, 00:18:16.582 "write_zeroes": true, 00:18:16.582 "zcopy": true, 00:18:16.582 "get_zone_info": false, 00:18:16.582 "zone_management": false, 00:18:16.582 "zone_append": false, 00:18:16.582 "compare": false, 00:18:16.582 "compare_and_write": false, 00:18:16.582 "abort": true, 00:18:16.582 "seek_hole": false, 00:18:16.582 "seek_data": false, 00:18:16.582 "copy": true, 00:18:16.582 "nvme_iov_md": false 00:18:16.582 }, 00:18:16.582 "memory_domains": [ 00:18:16.582 { 00:18:16.582 "dma_device_id": "system", 00:18:16.582 "dma_device_type": 1 00:18:16.582 }, 00:18:16.582 { 00:18:16.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.582 "dma_device_type": 2 00:18:16.582 } 00:18:16.582 ], 00:18:16.582 "driver_specific": {} 00:18:16.582 }' 00:18:16.582 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.582 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.582 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:16.582 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.582 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.582 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:16.582 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.582 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.582 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:16.582 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.582 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.582 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:16.582 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 
00:18:16.841 [2024-07-23 06:31:29.332543] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.841 [2024-07-23 06:31:29.332571] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.841 [2024-07-23 06:31:29.332595] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.841 [2024-07-23 06:31:29.332666] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.841 [2024-07-23 06:31:29.332671] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ca660234f00 name Existed_Raid, state offline 00:18:16.841 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 63063 00:18:16.841 06:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 63063 ']' 00:18:16.841 06:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 63063 00:18:16.841 06:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:18:16.841 06:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:16.841 06:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 63063 00:18:16.841 06:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:18:16.841 06:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:18:16.841 06:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:18:16.841 06:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63063' 00:18:16.841 killing process with pid 63063 00:18:16.841 06:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 63063 00:18:16.841 [2024-07-23 06:31:29.362394] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:16.841 06:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 63063 00:18:17.100 [2024-07-23 06:31:29.386099] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:17.100 00:18:17.100 real 0m28.184s 00:18:17.100 user 0m51.748s 00:18:17.100 sys 0m3.746s 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.100 ************************************ 00:18:17.100 END TEST raid_state_function_test 00:18:17.100 ************************************ 00:18:17.100 06:31:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:17.100 06:31:29 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:18:17.100 06:31:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:17.100 06:31:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.100 06:31:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.100 ************************************ 00:18:17.100 START TEST raid_state_function_test_sb 00:18:17.100 ************************************ 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # 
raid_state_function_test raid1 4 true 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=63886 00:18:17.100 Process raid pid: 63886 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 63886' 00:18:17.100 06:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 63886 /var/tmp/spdk-raid.sock 00:18:17.358 06:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 63886 ']' 00:18:17.358 06:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:17.358 06:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:17.359 06:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:17.359 06:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.359 06:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.359 [2024-07-23 06:31:29.630607] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:17.359 [2024-07-23 06:31:29.630839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:17.930 EAL: TSC is not safe to use in SMP mode 00:18:17.930 EAL: TSC is not invariant 00:18:17.931 [2024-07-23 06:31:30.167832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.931 [2024-07-23 06:31:30.263948] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:17.931 [2024-07-23 06:31:30.266418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.931 [2024-07-23 06:31:30.267354] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.931 [2024-07-23 06:31:30.267374] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.189 06:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.189 06:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:18:18.189 06:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:18.447 [2024-07-23 06:31:30.944803] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:18.447 [2024-07-23 06:31:30.944858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:18.447 [2024-07-23 06:31:30.944863] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:18.447 [2024-07-23 06:31:30.944872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:18.447 [2024-07-23 06:31:30.944876] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:18.447 [2024-07-23 06:31:30.944883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:18.447 [2024-07-23 06:31:30.944887] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:18.447 [2024-07-23 06:31:30.944902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't 
exist now 00:18:18.447 06:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:18.447 06:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:18.447 06:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:18.447 06:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:18.447 06:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:18.447 06:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:18.447 06:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.447 06:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.447 06:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.447 06:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.447 06:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.447 06:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.705 06:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:18.705 "name": "Existed_Raid", 00:18:18.705 "uuid": "32c33718-48bd-11ef-a06c-59ddad71024c", 00:18:18.705 "strip_size_kb": 0, 00:18:18.705 "state": "configuring", 00:18:18.705 "raid_level": "raid1", 00:18:18.705 "superblock": true, 00:18:18.705 "num_base_bdevs": 4, 00:18:18.705 "num_base_bdevs_discovered": 0, 00:18:18.705 "num_base_bdevs_operational": 4, 00:18:18.705 "base_bdevs_list": [ 00:18:18.705 { 00:18:18.705 "name": "BaseBdev1", 00:18:18.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.705 "is_configured": false, 00:18:18.705 "data_offset": 0, 00:18:18.705 "data_size": 0 00:18:18.705 }, 00:18:18.705 { 00:18:18.705 "name": "BaseBdev2", 00:18:18.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.705 "is_configured": false, 00:18:18.705 "data_offset": 0, 00:18:18.705 "data_size": 0 00:18:18.705 }, 00:18:18.705 { 00:18:18.705 "name": "BaseBdev3", 00:18:18.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.706 "is_configured": false, 00:18:18.706 "data_offset": 0, 00:18:18.706 "data_size": 0 00:18:18.706 }, 00:18:18.706 { 00:18:18.706 "name": "BaseBdev4", 00:18:18.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.706 "is_configured": false, 00:18:18.706 "data_offset": 0, 00:18:18.706 "data_size": 0 00:18:18.706 } 00:18:18.706 ] 00:18:18.706 }' 00:18:18.706 06:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:18.706 06:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.272 06:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:19.530 [2024-07-23 06:31:31.804806] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:19.530 [2024-07-23 06:31:31.804838] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x2e3cba834500 name Existed_Raid, state configuring 00:18:19.530 06:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:19.530 [2024-07-23 06:31:32.048822] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:19.530 [2024-07-23 06:31:32.048880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:19.530 [2024-07-23 06:31:32.048886] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:19.530 [2024-07-23 06:31:32.048894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:19.530 [2024-07-23 06:31:32.048898] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:19.530 [2024-07-23 06:31:32.048905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:19.530 [2024-07-23 06:31:32.048909] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:19.530 [2024-07-23 06:31:32.048916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:19.788 06:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:19.788 [2024-07-23 06:31:32.289870] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.788 BaseBdev1 00:18:19.788 06:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:19.788 06:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:19.788 06:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:19.788 06:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:19.788 06:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:19.788 06:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:19.788 06:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:20.046 06:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:20.304 [ 00:18:20.304 { 00:18:20.304 "name": "BaseBdev1", 00:18:20.304 "aliases": [ 00:18:20.304 "33904c7d-48bd-11ef-a06c-59ddad71024c" 00:18:20.304 ], 00:18:20.304 "product_name": "Malloc disk", 00:18:20.304 "block_size": 512, 00:18:20.304 "num_blocks": 65536, 00:18:20.304 "uuid": "33904c7d-48bd-11ef-a06c-59ddad71024c", 00:18:20.304 "assigned_rate_limits": { 00:18:20.304 "rw_ios_per_sec": 0, 00:18:20.304 "rw_mbytes_per_sec": 0, 00:18:20.304 "r_mbytes_per_sec": 0, 00:18:20.304 "w_mbytes_per_sec": 0 00:18:20.304 }, 00:18:20.304 "claimed": true, 00:18:20.304 "claim_type": "exclusive_write", 00:18:20.304 "zoned": false, 00:18:20.304 "supported_io_types": { 00:18:20.304 "read": true, 00:18:20.304 "write": true, 00:18:20.304 "unmap": true, 
00:18:20.304 "flush": true, 00:18:20.304 "reset": true, 00:18:20.304 "nvme_admin": false, 00:18:20.304 "nvme_io": false, 00:18:20.304 "nvme_io_md": false, 00:18:20.304 "write_zeroes": true, 00:18:20.304 "zcopy": true, 00:18:20.304 "get_zone_info": false, 00:18:20.304 "zone_management": false, 00:18:20.304 "zone_append": false, 00:18:20.304 "compare": false, 00:18:20.304 "compare_and_write": false, 00:18:20.304 "abort": true, 00:18:20.304 "seek_hole": false, 00:18:20.304 "seek_data": false, 00:18:20.304 "copy": true, 00:18:20.304 "nvme_iov_md": false 00:18:20.304 }, 00:18:20.304 "memory_domains": [ 00:18:20.304 { 00:18:20.304 "dma_device_id": "system", 00:18:20.304 "dma_device_type": 1 00:18:20.304 }, 00:18:20.304 { 00:18:20.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.304 "dma_device_type": 2 00:18:20.304 } 00:18:20.304 ], 00:18:20.304 "driver_specific": {} 00:18:20.304 } 00:18:20.304 ] 00:18:20.304 06:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:20.304 06:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:20.304 06:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:20.304 06:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:20.304 06:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:20.304 06:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:20.304 06:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:20.304 06:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:20.304 06:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.304 06:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.304 06:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.304 06:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.304 06:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.562 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.562 "name": "Existed_Raid", 00:18:20.562 "uuid": "336bacdc-48bd-11ef-a06c-59ddad71024c", 00:18:20.562 "strip_size_kb": 0, 00:18:20.562 "state": "configuring", 00:18:20.562 "raid_level": "raid1", 00:18:20.562 "superblock": true, 00:18:20.562 "num_base_bdevs": 4, 00:18:20.562 "num_base_bdevs_discovered": 1, 00:18:20.562 "num_base_bdevs_operational": 4, 00:18:20.562 "base_bdevs_list": [ 00:18:20.562 { 00:18:20.562 "name": "BaseBdev1", 00:18:20.562 "uuid": "33904c7d-48bd-11ef-a06c-59ddad71024c", 00:18:20.562 "is_configured": true, 00:18:20.562 "data_offset": 2048, 00:18:20.562 "data_size": 63488 00:18:20.562 }, 00:18:20.562 { 00:18:20.562 "name": "BaseBdev2", 00:18:20.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.562 "is_configured": false, 00:18:20.562 "data_offset": 0, 00:18:20.562 "data_size": 0 00:18:20.562 }, 00:18:20.562 { 00:18:20.562 "name": "BaseBdev3", 00:18:20.562 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:20.562 "is_configured": false, 00:18:20.562 "data_offset": 0, 00:18:20.562 "data_size": 0 00:18:20.562 }, 00:18:20.562 { 00:18:20.562 "name": "BaseBdev4", 00:18:20.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.562 "is_configured": false, 00:18:20.562 "data_offset": 0, 00:18:20.562 "data_size": 0 00:18:20.562 } 00:18:20.562 ] 00:18:20.562 }' 00:18:20.562 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:20.562 06:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.127 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:21.127 [2024-07-23 06:31:33.584843] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:21.127 [2024-07-23 06:31:33.584879] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e3cba834500 name Existed_Raid, state configuring 00:18:21.127 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:21.386 [2024-07-23 06:31:33.880862] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:21.386 [2024-07-23 06:31:33.881656] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:21.386 [2024-07-23 06:31:33.881695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:21.386 [2024-07-23 06:31:33.881700] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:21.386 [2024-07-23 06:31:33.881709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:21.386 [2024-07-23 06:31:33.881713] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:21.386 [2024-07-23 06:31:33.881721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:21.386 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:21.386 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:21.386 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:21.386 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:21.386 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:21.386 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:21.386 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:21.386 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:21.386 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.386 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.386 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:18:21.386 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:21.386 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.386 06:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.953 06:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:21.953 "name": "Existed_Raid", 00:18:21.953 "uuid": "348338c6-48bd-11ef-a06c-59ddad71024c", 00:18:21.953 "strip_size_kb": 0, 00:18:21.953 "state": "configuring", 00:18:21.953 "raid_level": "raid1", 00:18:21.953 "superblock": true, 00:18:21.953 "num_base_bdevs": 4, 00:18:21.953 "num_base_bdevs_discovered": 1, 00:18:21.953 "num_base_bdevs_operational": 4, 00:18:21.953 "base_bdevs_list": [ 00:18:21.953 { 00:18:21.953 "name": "BaseBdev1", 00:18:21.953 "uuid": "33904c7d-48bd-11ef-a06c-59ddad71024c", 00:18:21.953 "is_configured": true, 00:18:21.953 "data_offset": 2048, 00:18:21.953 "data_size": 63488 00:18:21.953 }, 00:18:21.953 { 00:18:21.953 "name": "BaseBdev2", 00:18:21.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.953 "is_configured": false, 00:18:21.953 "data_offset": 0, 00:18:21.953 "data_size": 0 00:18:21.953 }, 00:18:21.953 { 00:18:21.953 "name": "BaseBdev3", 00:18:21.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.953 "is_configured": false, 00:18:21.953 "data_offset": 0, 00:18:21.953 "data_size": 0 00:18:21.953 }, 00:18:21.953 { 00:18:21.953 "name": "BaseBdev4", 00:18:21.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.953 "is_configured": false, 00:18:21.953 "data_offset": 0, 00:18:21.953 "data_size": 0 00:18:21.953 } 00:18:21.953 ] 00:18:21.953 }' 00:18:21.953 06:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:21.953 06:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.211 06:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:22.468 [2024-07-23 06:31:34.765028] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:22.468 BaseBdev2 00:18:22.468 06:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:22.468 06:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:22.468 06:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:22.468 06:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:22.468 06:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:22.468 06:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:22.468 06:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:22.726 06:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:22.984 [ 00:18:22.984 { 00:18:22.984 "name": "BaseBdev2", 
00:18:22.984 "aliases": [ 00:18:22.984 "350a1d71-48bd-11ef-a06c-59ddad71024c" 00:18:22.984 ], 00:18:22.984 "product_name": "Malloc disk", 00:18:22.984 "block_size": 512, 00:18:22.984 "num_blocks": 65536, 00:18:22.984 "uuid": "350a1d71-48bd-11ef-a06c-59ddad71024c", 00:18:22.984 "assigned_rate_limits": { 00:18:22.984 "rw_ios_per_sec": 0, 00:18:22.984 "rw_mbytes_per_sec": 0, 00:18:22.984 "r_mbytes_per_sec": 0, 00:18:22.984 "w_mbytes_per_sec": 0 00:18:22.984 }, 00:18:22.984 "claimed": true, 00:18:22.984 "claim_type": "exclusive_write", 00:18:22.984 "zoned": false, 00:18:22.984 "supported_io_types": { 00:18:22.984 "read": true, 00:18:22.984 "write": true, 00:18:22.984 "unmap": true, 00:18:22.984 "flush": true, 00:18:22.984 "reset": true, 00:18:22.984 "nvme_admin": false, 00:18:22.984 "nvme_io": false, 00:18:22.984 "nvme_io_md": false, 00:18:22.984 "write_zeroes": true, 00:18:22.984 "zcopy": true, 00:18:22.984 "get_zone_info": false, 00:18:22.984 "zone_management": false, 00:18:22.984 "zone_append": false, 00:18:22.984 "compare": false, 00:18:22.984 "compare_and_write": false, 00:18:22.984 "abort": true, 00:18:22.984 "seek_hole": false, 00:18:22.984 "seek_data": false, 00:18:22.984 "copy": true, 00:18:22.984 "nvme_iov_md": false 00:18:22.984 }, 00:18:22.984 "memory_domains": [ 00:18:22.984 { 00:18:22.984 "dma_device_id": "system", 00:18:22.984 "dma_device_type": 1 00:18:22.984 }, 00:18:22.984 { 00:18:22.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.984 "dma_device_type": 2 00:18:22.984 } 00:18:22.984 ], 00:18:22.984 "driver_specific": {} 00:18:22.984 } 00:18:22.984 ] 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.984 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.243 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.243 "name": 
"Existed_Raid", 00:18:23.243 "uuid": "348338c6-48bd-11ef-a06c-59ddad71024c", 00:18:23.243 "strip_size_kb": 0, 00:18:23.243 "state": "configuring", 00:18:23.243 "raid_level": "raid1", 00:18:23.243 "superblock": true, 00:18:23.243 "num_base_bdevs": 4, 00:18:23.243 "num_base_bdevs_discovered": 2, 00:18:23.243 "num_base_bdevs_operational": 4, 00:18:23.243 "base_bdevs_list": [ 00:18:23.243 { 00:18:23.243 "name": "BaseBdev1", 00:18:23.243 "uuid": "33904c7d-48bd-11ef-a06c-59ddad71024c", 00:18:23.243 "is_configured": true, 00:18:23.243 "data_offset": 2048, 00:18:23.243 "data_size": 63488 00:18:23.243 }, 00:18:23.243 { 00:18:23.243 "name": "BaseBdev2", 00:18:23.243 "uuid": "350a1d71-48bd-11ef-a06c-59ddad71024c", 00:18:23.243 "is_configured": true, 00:18:23.243 "data_offset": 2048, 00:18:23.243 "data_size": 63488 00:18:23.243 }, 00:18:23.243 { 00:18:23.243 "name": "BaseBdev3", 00:18:23.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.243 "is_configured": false, 00:18:23.243 "data_offset": 0, 00:18:23.243 "data_size": 0 00:18:23.243 }, 00:18:23.243 { 00:18:23.243 "name": "BaseBdev4", 00:18:23.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.243 "is_configured": false, 00:18:23.243 "data_offset": 0, 00:18:23.243 "data_size": 0 00:18:23.243 } 00:18:23.243 ] 00:18:23.243 }' 00:18:23.243 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.243 06:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.501 06:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:23.759 [2024-07-23 06:31:36.173053] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:23.759 BaseBdev3 00:18:23.759 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:23.759 06:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:23.759 06:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:23.759 06:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:23.759 06:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:23.759 06:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:23.759 06:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.017 06:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:24.274 [ 00:18:24.274 { 00:18:24.274 "name": "BaseBdev3", 00:18:24.274 "aliases": [ 00:18:24.274 "35e0f744-48bd-11ef-a06c-59ddad71024c" 00:18:24.274 ], 00:18:24.274 "product_name": "Malloc disk", 00:18:24.274 "block_size": 512, 00:18:24.274 "num_blocks": 65536, 00:18:24.274 "uuid": "35e0f744-48bd-11ef-a06c-59ddad71024c", 00:18:24.274 "assigned_rate_limits": { 00:18:24.274 "rw_ios_per_sec": 0, 00:18:24.274 "rw_mbytes_per_sec": 0, 00:18:24.274 "r_mbytes_per_sec": 0, 00:18:24.274 "w_mbytes_per_sec": 0 00:18:24.274 }, 00:18:24.274 "claimed": true, 00:18:24.274 "claim_type": "exclusive_write", 
00:18:24.274 "zoned": false, 00:18:24.274 "supported_io_types": { 00:18:24.274 "read": true, 00:18:24.274 "write": true, 00:18:24.274 "unmap": true, 00:18:24.274 "flush": true, 00:18:24.274 "reset": true, 00:18:24.274 "nvme_admin": false, 00:18:24.274 "nvme_io": false, 00:18:24.274 "nvme_io_md": false, 00:18:24.274 "write_zeroes": true, 00:18:24.274 "zcopy": true, 00:18:24.274 "get_zone_info": false, 00:18:24.274 "zone_management": false, 00:18:24.274 "zone_append": false, 00:18:24.274 "compare": false, 00:18:24.274 "compare_and_write": false, 00:18:24.274 "abort": true, 00:18:24.274 "seek_hole": false, 00:18:24.274 "seek_data": false, 00:18:24.274 "copy": true, 00:18:24.274 "nvme_iov_md": false 00:18:24.274 }, 00:18:24.274 "memory_domains": [ 00:18:24.274 { 00:18:24.274 "dma_device_id": "system", 00:18:24.274 "dma_device_type": 1 00:18:24.274 }, 00:18:24.274 { 00:18:24.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.274 "dma_device_type": 2 00:18:24.274 } 00:18:24.275 ], 00:18:24.275 "driver_specific": {} 00:18:24.275 } 00:18:24.275 ] 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.275 06:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.531 06:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.531 "name": "Existed_Raid", 00:18:24.531 "uuid": "348338c6-48bd-11ef-a06c-59ddad71024c", 00:18:24.531 "strip_size_kb": 0, 00:18:24.531 "state": "configuring", 00:18:24.531 "raid_level": "raid1", 00:18:24.531 "superblock": true, 00:18:24.531 "num_base_bdevs": 4, 00:18:24.531 "num_base_bdevs_discovered": 3, 00:18:24.531 "num_base_bdevs_operational": 4, 00:18:24.531 "base_bdevs_list": [ 00:18:24.531 { 00:18:24.531 "name": "BaseBdev1", 00:18:24.531 "uuid": "33904c7d-48bd-11ef-a06c-59ddad71024c", 00:18:24.531 "is_configured": true, 00:18:24.531 
"data_offset": 2048, 00:18:24.531 "data_size": 63488 00:18:24.531 }, 00:18:24.531 { 00:18:24.531 "name": "BaseBdev2", 00:18:24.531 "uuid": "350a1d71-48bd-11ef-a06c-59ddad71024c", 00:18:24.531 "is_configured": true, 00:18:24.531 "data_offset": 2048, 00:18:24.531 "data_size": 63488 00:18:24.531 }, 00:18:24.531 { 00:18:24.531 "name": "BaseBdev3", 00:18:24.531 "uuid": "35e0f744-48bd-11ef-a06c-59ddad71024c", 00:18:24.531 "is_configured": true, 00:18:24.531 "data_offset": 2048, 00:18:24.531 "data_size": 63488 00:18:24.531 }, 00:18:24.531 { 00:18:24.531 "name": "BaseBdev4", 00:18:24.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.531 "is_configured": false, 00:18:24.531 "data_offset": 0, 00:18:24.531 "data_size": 0 00:18:24.531 } 00:18:24.531 ] 00:18:24.531 }' 00:18:24.531 06:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:24.531 06:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.789 06:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:25.080 [2024-07-23 06:31:37.521075] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:25.080 [2024-07-23 06:31:37.521150] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2e3cba834a00 00:18:25.080 [2024-07-23 06:31:37.521156] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:25.080 [2024-07-23 06:31:37.521179] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2e3cba897e20 00:18:25.080 [2024-07-23 06:31:37.521238] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2e3cba834a00 00:18:25.080 [2024-07-23 06:31:37.521243] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2e3cba834a00 00:18:25.080 [2024-07-23 06:31:37.521264] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.080 BaseBdev4 00:18:25.080 06:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:18:25.080 06:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:25.080 06:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:25.080 06:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:25.080 06:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:25.080 06:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:25.080 06:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:25.339 06:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:25.596 [ 00:18:25.596 { 00:18:25.596 "name": "BaseBdev4", 00:18:25.596 "aliases": [ 00:18:25.596 "36aea84d-48bd-11ef-a06c-59ddad71024c" 00:18:25.596 ], 00:18:25.596 "product_name": "Malloc disk", 00:18:25.596 "block_size": 512, 00:18:25.596 "num_blocks": 65536, 00:18:25.596 "uuid": "36aea84d-48bd-11ef-a06c-59ddad71024c", 00:18:25.596 
"assigned_rate_limits": { 00:18:25.596 "rw_ios_per_sec": 0, 00:18:25.596 "rw_mbytes_per_sec": 0, 00:18:25.596 "r_mbytes_per_sec": 0, 00:18:25.596 "w_mbytes_per_sec": 0 00:18:25.596 }, 00:18:25.596 "claimed": true, 00:18:25.596 "claim_type": "exclusive_write", 00:18:25.596 "zoned": false, 00:18:25.596 "supported_io_types": { 00:18:25.596 "read": true, 00:18:25.596 "write": true, 00:18:25.596 "unmap": true, 00:18:25.596 "flush": true, 00:18:25.596 "reset": true, 00:18:25.596 "nvme_admin": false, 00:18:25.596 "nvme_io": false, 00:18:25.596 "nvme_io_md": false, 00:18:25.596 "write_zeroes": true, 00:18:25.596 "zcopy": true, 00:18:25.596 "get_zone_info": false, 00:18:25.596 "zone_management": false, 00:18:25.596 "zone_append": false, 00:18:25.596 "compare": false, 00:18:25.596 "compare_and_write": false, 00:18:25.596 "abort": true, 00:18:25.596 "seek_hole": false, 00:18:25.596 "seek_data": false, 00:18:25.596 "copy": true, 00:18:25.596 "nvme_iov_md": false 00:18:25.596 }, 00:18:25.596 "memory_domains": [ 00:18:25.596 { 00:18:25.596 "dma_device_id": "system", 00:18:25.596 "dma_device_type": 1 00:18:25.596 }, 00:18:25.596 { 00:18:25.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.596 "dma_device_type": 2 00:18:25.596 } 00:18:25.596 ], 00:18:25.596 "driver_specific": {} 00:18:25.596 } 00:18:25.596 ] 00:18:25.596 06:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:25.596 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:25.596 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:25.596 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:25.596 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:25.596 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:25.596 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:25.596 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:25.596 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:25.597 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.597 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.597 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.597 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.597 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.597 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.854 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.854 "name": "Existed_Raid", 00:18:25.854 "uuid": "348338c6-48bd-11ef-a06c-59ddad71024c", 00:18:25.854 "strip_size_kb": 0, 00:18:25.854 "state": "online", 00:18:25.854 "raid_level": "raid1", 00:18:25.854 "superblock": true, 00:18:25.854 "num_base_bdevs": 4, 00:18:25.854 "num_base_bdevs_discovered": 
4, 00:18:25.854 "num_base_bdevs_operational": 4, 00:18:25.854 "base_bdevs_list": [ 00:18:25.854 { 00:18:25.854 "name": "BaseBdev1", 00:18:25.854 "uuid": "33904c7d-48bd-11ef-a06c-59ddad71024c", 00:18:25.854 "is_configured": true, 00:18:25.854 "data_offset": 2048, 00:18:25.854 "data_size": 63488 00:18:25.854 }, 00:18:25.854 { 00:18:25.854 "name": "BaseBdev2", 00:18:25.854 "uuid": "350a1d71-48bd-11ef-a06c-59ddad71024c", 00:18:25.854 "is_configured": true, 00:18:25.854 "data_offset": 2048, 00:18:25.854 "data_size": 63488 00:18:25.854 }, 00:18:25.854 { 00:18:25.854 "name": "BaseBdev3", 00:18:25.854 "uuid": "35e0f744-48bd-11ef-a06c-59ddad71024c", 00:18:25.854 "is_configured": true, 00:18:25.854 "data_offset": 2048, 00:18:25.854 "data_size": 63488 00:18:25.854 }, 00:18:25.854 { 00:18:25.854 "name": "BaseBdev4", 00:18:25.854 "uuid": "36aea84d-48bd-11ef-a06c-59ddad71024c", 00:18:25.854 "is_configured": true, 00:18:25.854 "data_offset": 2048, 00:18:25.854 "data_size": 63488 00:18:25.854 } 00:18:25.854 ] 00:18:25.854 }' 00:18:25.854 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.854 06:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.421 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:26.421 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:26.421 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:26.421 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:26.421 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:26.421 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:26.421 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:26.421 06:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:26.679 [2024-07-23 06:31:39.129078] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.679 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:26.679 "name": "Existed_Raid", 00:18:26.679 "aliases": [ 00:18:26.679 "348338c6-48bd-11ef-a06c-59ddad71024c" 00:18:26.679 ], 00:18:26.679 "product_name": "Raid Volume", 00:18:26.679 "block_size": 512, 00:18:26.679 "num_blocks": 63488, 00:18:26.679 "uuid": "348338c6-48bd-11ef-a06c-59ddad71024c", 00:18:26.679 "assigned_rate_limits": { 00:18:26.679 "rw_ios_per_sec": 0, 00:18:26.679 "rw_mbytes_per_sec": 0, 00:18:26.679 "r_mbytes_per_sec": 0, 00:18:26.679 "w_mbytes_per_sec": 0 00:18:26.679 }, 00:18:26.679 "claimed": false, 00:18:26.679 "zoned": false, 00:18:26.679 "supported_io_types": { 00:18:26.679 "read": true, 00:18:26.679 "write": true, 00:18:26.679 "unmap": false, 00:18:26.679 "flush": false, 00:18:26.679 "reset": true, 00:18:26.679 "nvme_admin": false, 00:18:26.679 "nvme_io": false, 00:18:26.679 "nvme_io_md": false, 00:18:26.679 "write_zeroes": true, 00:18:26.679 "zcopy": false, 00:18:26.679 "get_zone_info": false, 00:18:26.679 "zone_management": false, 00:18:26.679 "zone_append": false, 00:18:26.679 "compare": false, 00:18:26.679 "compare_and_write": false, 00:18:26.679 "abort": 
false, 00:18:26.679 "seek_hole": false, 00:18:26.679 "seek_data": false, 00:18:26.679 "copy": false, 00:18:26.679 "nvme_iov_md": false 00:18:26.679 }, 00:18:26.679 "memory_domains": [ 00:18:26.679 { 00:18:26.679 "dma_device_id": "system", 00:18:26.679 "dma_device_type": 1 00:18:26.679 }, 00:18:26.679 { 00:18:26.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.679 "dma_device_type": 2 00:18:26.679 }, 00:18:26.679 { 00:18:26.679 "dma_device_id": "system", 00:18:26.679 "dma_device_type": 1 00:18:26.679 }, 00:18:26.679 { 00:18:26.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.679 "dma_device_type": 2 00:18:26.679 }, 00:18:26.679 { 00:18:26.679 "dma_device_id": "system", 00:18:26.679 "dma_device_type": 1 00:18:26.679 }, 00:18:26.679 { 00:18:26.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.679 "dma_device_type": 2 00:18:26.679 }, 00:18:26.679 { 00:18:26.679 "dma_device_id": "system", 00:18:26.679 "dma_device_type": 1 00:18:26.679 }, 00:18:26.679 { 00:18:26.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.679 "dma_device_type": 2 00:18:26.679 } 00:18:26.679 ], 00:18:26.679 "driver_specific": { 00:18:26.679 "raid": { 00:18:26.679 "uuid": "348338c6-48bd-11ef-a06c-59ddad71024c", 00:18:26.679 "strip_size_kb": 0, 00:18:26.679 "state": "online", 00:18:26.679 "raid_level": "raid1", 00:18:26.679 "superblock": true, 00:18:26.679 "num_base_bdevs": 4, 00:18:26.679 "num_base_bdevs_discovered": 4, 00:18:26.679 "num_base_bdevs_operational": 4, 00:18:26.679 "base_bdevs_list": [ 00:18:26.679 { 00:18:26.679 "name": "BaseBdev1", 00:18:26.679 "uuid": "33904c7d-48bd-11ef-a06c-59ddad71024c", 00:18:26.679 "is_configured": true, 00:18:26.679 "data_offset": 2048, 00:18:26.679 "data_size": 63488 00:18:26.679 }, 00:18:26.679 { 00:18:26.679 "name": "BaseBdev2", 00:18:26.679 "uuid": "350a1d71-48bd-11ef-a06c-59ddad71024c", 00:18:26.679 "is_configured": true, 00:18:26.679 "data_offset": 2048, 00:18:26.679 "data_size": 63488 00:18:26.679 }, 00:18:26.679 { 00:18:26.679 "name": "BaseBdev3", 00:18:26.679 "uuid": "35e0f744-48bd-11ef-a06c-59ddad71024c", 00:18:26.679 "is_configured": true, 00:18:26.679 "data_offset": 2048, 00:18:26.679 "data_size": 63488 00:18:26.679 }, 00:18:26.679 { 00:18:26.679 "name": "BaseBdev4", 00:18:26.679 "uuid": "36aea84d-48bd-11ef-a06c-59ddad71024c", 00:18:26.679 "is_configured": true, 00:18:26.679 "data_offset": 2048, 00:18:26.679 "data_size": 63488 00:18:26.679 } 00:18:26.679 ] 00:18:26.679 } 00:18:26.679 } 00:18:26.679 }' 00:18:26.679 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:26.679 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:26.679 BaseBdev2 00:18:26.679 BaseBdev3 00:18:26.679 BaseBdev4' 00:18:26.679 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:26.679 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:26.679 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:27.244 "name": "BaseBdev1", 00:18:27.244 "aliases": [ 00:18:27.244 "33904c7d-48bd-11ef-a06c-59ddad71024c" 00:18:27.244 ], 00:18:27.244 "product_name": "Malloc disk", 00:18:27.244 
"block_size": 512, 00:18:27.244 "num_blocks": 65536, 00:18:27.244 "uuid": "33904c7d-48bd-11ef-a06c-59ddad71024c", 00:18:27.244 "assigned_rate_limits": { 00:18:27.244 "rw_ios_per_sec": 0, 00:18:27.244 "rw_mbytes_per_sec": 0, 00:18:27.244 "r_mbytes_per_sec": 0, 00:18:27.244 "w_mbytes_per_sec": 0 00:18:27.244 }, 00:18:27.244 "claimed": true, 00:18:27.244 "claim_type": "exclusive_write", 00:18:27.244 "zoned": false, 00:18:27.244 "supported_io_types": { 00:18:27.244 "read": true, 00:18:27.244 "write": true, 00:18:27.244 "unmap": true, 00:18:27.244 "flush": true, 00:18:27.244 "reset": true, 00:18:27.244 "nvme_admin": false, 00:18:27.244 "nvme_io": false, 00:18:27.244 "nvme_io_md": false, 00:18:27.244 "write_zeroes": true, 00:18:27.244 "zcopy": true, 00:18:27.244 "get_zone_info": false, 00:18:27.244 "zone_management": false, 00:18:27.244 "zone_append": false, 00:18:27.244 "compare": false, 00:18:27.244 "compare_and_write": false, 00:18:27.244 "abort": true, 00:18:27.244 "seek_hole": false, 00:18:27.244 "seek_data": false, 00:18:27.244 "copy": true, 00:18:27.244 "nvme_iov_md": false 00:18:27.244 }, 00:18:27.244 "memory_domains": [ 00:18:27.244 { 00:18:27.244 "dma_device_id": "system", 00:18:27.244 "dma_device_type": 1 00:18:27.244 }, 00:18:27.244 { 00:18:27.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.244 "dma_device_type": 2 00:18:27.244 } 00:18:27.244 ], 00:18:27.244 "driver_specific": {} 00:18:27.244 }' 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:27.244 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:27.503 "name": "BaseBdev2", 00:18:27.503 "aliases": [ 00:18:27.503 "350a1d71-48bd-11ef-a06c-59ddad71024c" 00:18:27.503 ], 00:18:27.503 "product_name": "Malloc disk", 00:18:27.503 "block_size": 512, 00:18:27.503 "num_blocks": 65536, 00:18:27.503 "uuid": "350a1d71-48bd-11ef-a06c-59ddad71024c", 00:18:27.503 "assigned_rate_limits": { 
00:18:27.503 "rw_ios_per_sec": 0, 00:18:27.503 "rw_mbytes_per_sec": 0, 00:18:27.503 "r_mbytes_per_sec": 0, 00:18:27.503 "w_mbytes_per_sec": 0 00:18:27.503 }, 00:18:27.503 "claimed": true, 00:18:27.503 "claim_type": "exclusive_write", 00:18:27.503 "zoned": false, 00:18:27.503 "supported_io_types": { 00:18:27.503 "read": true, 00:18:27.503 "write": true, 00:18:27.503 "unmap": true, 00:18:27.503 "flush": true, 00:18:27.503 "reset": true, 00:18:27.503 "nvme_admin": false, 00:18:27.503 "nvme_io": false, 00:18:27.503 "nvme_io_md": false, 00:18:27.503 "write_zeroes": true, 00:18:27.503 "zcopy": true, 00:18:27.503 "get_zone_info": false, 00:18:27.503 "zone_management": false, 00:18:27.503 "zone_append": false, 00:18:27.503 "compare": false, 00:18:27.503 "compare_and_write": false, 00:18:27.503 "abort": true, 00:18:27.503 "seek_hole": false, 00:18:27.503 "seek_data": false, 00:18:27.503 "copy": true, 00:18:27.503 "nvme_iov_md": false 00:18:27.503 }, 00:18:27.503 "memory_domains": [ 00:18:27.503 { 00:18:27.503 "dma_device_id": "system", 00:18:27.503 "dma_device_type": 1 00:18:27.503 }, 00:18:27.503 { 00:18:27.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.503 "dma_device_type": 2 00:18:27.503 } 00:18:27.503 ], 00:18:27.503 "driver_specific": {} 00:18:27.503 }' 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:27.503 06:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:27.760 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:27.760 "name": "BaseBdev3", 00:18:27.760 "aliases": [ 00:18:27.760 "35e0f744-48bd-11ef-a06c-59ddad71024c" 00:18:27.760 ], 00:18:27.760 "product_name": "Malloc disk", 00:18:27.760 "block_size": 512, 00:18:27.760 "num_blocks": 65536, 00:18:27.760 "uuid": "35e0f744-48bd-11ef-a06c-59ddad71024c", 00:18:27.760 "assigned_rate_limits": { 00:18:27.760 "rw_ios_per_sec": 0, 00:18:27.760 "rw_mbytes_per_sec": 0, 00:18:27.760 "r_mbytes_per_sec": 0, 00:18:27.760 "w_mbytes_per_sec": 0 
00:18:27.760 }, 00:18:27.760 "claimed": true, 00:18:27.760 "claim_type": "exclusive_write", 00:18:27.760 "zoned": false, 00:18:27.760 "supported_io_types": { 00:18:27.760 "read": true, 00:18:27.760 "write": true, 00:18:27.760 "unmap": true, 00:18:27.760 "flush": true, 00:18:27.760 "reset": true, 00:18:27.760 "nvme_admin": false, 00:18:27.760 "nvme_io": false, 00:18:27.760 "nvme_io_md": false, 00:18:27.760 "write_zeroes": true, 00:18:27.760 "zcopy": true, 00:18:27.760 "get_zone_info": false, 00:18:27.760 "zone_management": false, 00:18:27.760 "zone_append": false, 00:18:27.760 "compare": false, 00:18:27.760 "compare_and_write": false, 00:18:27.760 "abort": true, 00:18:27.760 "seek_hole": false, 00:18:27.760 "seek_data": false, 00:18:27.760 "copy": true, 00:18:27.760 "nvme_iov_md": false 00:18:27.760 }, 00:18:27.760 "memory_domains": [ 00:18:27.760 { 00:18:27.760 "dma_device_id": "system", 00:18:27.760 "dma_device_type": 1 00:18:27.760 }, 00:18:27.760 { 00:18:27.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.760 "dma_device_type": 2 00:18:27.760 } 00:18:27.760 ], 00:18:27.760 "driver_specific": {} 00:18:27.760 }' 00:18:27.760 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.760 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.760 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:27.761 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.761 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.018 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:28.018 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.018 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.018 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:28.018 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.018 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.018 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:28.018 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:28.018 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:28.018 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:28.275 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:28.275 "name": "BaseBdev4", 00:18:28.275 "aliases": [ 00:18:28.275 "36aea84d-48bd-11ef-a06c-59ddad71024c" 00:18:28.275 ], 00:18:28.275 "product_name": "Malloc disk", 00:18:28.275 "block_size": 512, 00:18:28.275 "num_blocks": 65536, 00:18:28.275 "uuid": "36aea84d-48bd-11ef-a06c-59ddad71024c", 00:18:28.275 "assigned_rate_limits": { 00:18:28.275 "rw_ios_per_sec": 0, 00:18:28.275 "rw_mbytes_per_sec": 0, 00:18:28.275 "r_mbytes_per_sec": 0, 00:18:28.275 "w_mbytes_per_sec": 0 00:18:28.275 }, 00:18:28.275 "claimed": true, 00:18:28.275 "claim_type": "exclusive_write", 00:18:28.275 "zoned": false, 00:18:28.275 
"supported_io_types": { 00:18:28.275 "read": true, 00:18:28.275 "write": true, 00:18:28.275 "unmap": true, 00:18:28.275 "flush": true, 00:18:28.275 "reset": true, 00:18:28.275 "nvme_admin": false, 00:18:28.275 "nvme_io": false, 00:18:28.275 "nvme_io_md": false, 00:18:28.275 "write_zeroes": true, 00:18:28.275 "zcopy": true, 00:18:28.275 "get_zone_info": false, 00:18:28.275 "zone_management": false, 00:18:28.275 "zone_append": false, 00:18:28.275 "compare": false, 00:18:28.275 "compare_and_write": false, 00:18:28.275 "abort": true, 00:18:28.275 "seek_hole": false, 00:18:28.275 "seek_data": false, 00:18:28.275 "copy": true, 00:18:28.275 "nvme_iov_md": false 00:18:28.275 }, 00:18:28.275 "memory_domains": [ 00:18:28.275 { 00:18:28.275 "dma_device_id": "system", 00:18:28.275 "dma_device_type": 1 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.275 "dma_device_type": 2 00:18:28.275 } 00:18:28.275 ], 00:18:28.275 "driver_specific": {} 00:18:28.275 }' 00:18:28.275 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.275 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.275 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:28.275 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.275 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.275 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:28.275 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.275 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.275 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:28.275 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.275 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.275 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:28.275 06:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:28.533 [2024-07-23 06:31:41.041082] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.791 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.049 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.049 "name": "Existed_Raid", 00:18:29.049 "uuid": "348338c6-48bd-11ef-a06c-59ddad71024c", 00:18:29.049 "strip_size_kb": 0, 00:18:29.049 "state": "online", 00:18:29.049 "raid_level": "raid1", 00:18:29.049 "superblock": true, 00:18:29.049 "num_base_bdevs": 4, 00:18:29.049 "num_base_bdevs_discovered": 3, 00:18:29.049 "num_base_bdevs_operational": 3, 00:18:29.049 "base_bdevs_list": [ 00:18:29.049 { 00:18:29.049 "name": null, 00:18:29.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.049 "is_configured": false, 00:18:29.049 "data_offset": 2048, 00:18:29.049 "data_size": 63488 00:18:29.049 }, 00:18:29.049 { 00:18:29.049 "name": "BaseBdev2", 00:18:29.049 "uuid": "350a1d71-48bd-11ef-a06c-59ddad71024c", 00:18:29.049 "is_configured": true, 00:18:29.049 "data_offset": 2048, 00:18:29.049 "data_size": 63488 00:18:29.049 }, 00:18:29.049 { 00:18:29.049 "name": "BaseBdev3", 00:18:29.049 "uuid": "35e0f744-48bd-11ef-a06c-59ddad71024c", 00:18:29.049 "is_configured": true, 00:18:29.049 "data_offset": 2048, 00:18:29.049 "data_size": 63488 00:18:29.049 }, 00:18:29.049 { 00:18:29.049 "name": "BaseBdev4", 00:18:29.049 "uuid": "36aea84d-48bd-11ef-a06c-59ddad71024c", 00:18:29.049 "is_configured": true, 00:18:29.049 "data_offset": 2048, 00:18:29.049 "data_size": 63488 00:18:29.049 } 00:18:29.049 ] 00:18:29.049 }' 00:18:29.049 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.049 06:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.307 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:29.307 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:29.307 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.307 06:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:29.872 06:31:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:29.872 06:31:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:29.872 06:31:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:30.130 [2024-07-23 06:31:42.406996] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:30.130 06:31:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:30.130 06:31:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:30.130 06:31:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.130 06:31:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:30.389 06:31:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:30.389 06:31:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:30.389 06:31:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:30.660 [2024-07-23 06:31:43.052847] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:30.660 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:30.661 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:30.661 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.661 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:30.918 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:30.918 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:30.918 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:31.176 [2024-07-23 06:31:43.546887] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:31.176 [2024-07-23 06:31:43.546935] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.176 [2024-07-23 06:31:43.552830] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.176 [2024-07-23 06:31:43.552850] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.176 [2024-07-23 06:31:43.552856] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e3cba834a00 name Existed_Raid, state offline 00:18:31.176 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:31.176 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:31.176 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.176 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:31.434 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:31.434 06:31:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:31.434 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:18:31.434 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:31.434 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:31.434 06:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:31.692 BaseBdev2 00:18:31.692 06:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:31.692 06:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:31.692 06:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:31.692 06:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:31.692 06:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:31.692 06:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:31.692 06:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:31.951 06:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:32.209 [ 00:18:32.209 { 00:18:32.209 "name": "BaseBdev2", 00:18:32.209 "aliases": [ 00:18:32.209 "3a8f1126-48bd-11ef-a06c-59ddad71024c" 00:18:32.209 ], 00:18:32.209 "product_name": "Malloc disk", 00:18:32.209 "block_size": 512, 00:18:32.209 "num_blocks": 65536, 00:18:32.209 "uuid": "3a8f1126-48bd-11ef-a06c-59ddad71024c", 00:18:32.209 "assigned_rate_limits": { 00:18:32.209 "rw_ios_per_sec": 0, 00:18:32.209 "rw_mbytes_per_sec": 0, 00:18:32.209 "r_mbytes_per_sec": 0, 00:18:32.209 "w_mbytes_per_sec": 0 00:18:32.209 }, 00:18:32.209 "claimed": false, 00:18:32.209 "zoned": false, 00:18:32.209 "supported_io_types": { 00:18:32.209 "read": true, 00:18:32.209 "write": true, 00:18:32.209 "unmap": true, 00:18:32.209 "flush": true, 00:18:32.209 "reset": true, 00:18:32.209 "nvme_admin": false, 00:18:32.209 "nvme_io": false, 00:18:32.209 "nvme_io_md": false, 00:18:32.209 "write_zeroes": true, 00:18:32.209 "zcopy": true, 00:18:32.209 "get_zone_info": false, 00:18:32.209 "zone_management": false, 00:18:32.209 "zone_append": false, 00:18:32.209 "compare": false, 00:18:32.209 "compare_and_write": false, 00:18:32.209 "abort": true, 00:18:32.209 "seek_hole": false, 00:18:32.209 "seek_data": false, 00:18:32.209 "copy": true, 00:18:32.209 "nvme_iov_md": false 00:18:32.209 }, 00:18:32.209 "memory_domains": [ 00:18:32.209 { 00:18:32.209 "dma_device_id": "system", 00:18:32.209 "dma_device_type": 1 00:18:32.209 }, 00:18:32.209 { 00:18:32.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.209 "dma_device_type": 2 00:18:32.209 } 00:18:32.209 ], 00:18:32.209 "driver_specific": {} 00:18:32.209 } 00:18:32.209 ] 00:18:32.209 06:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:32.209 06:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:32.209 06:31:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:32.209 06:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:32.467 BaseBdev3 00:18:32.467 06:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:32.467 06:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:32.467 06:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:32.467 06:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:32.467 06:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:32.467 06:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:32.467 06:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:32.725 06:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:32.725 [ 00:18:32.725 { 00:18:32.725 "name": "BaseBdev3", 00:18:32.725 "aliases": [ 00:18:32.725 "3afffc30-48bd-11ef-a06c-59ddad71024c" 00:18:32.725 ], 00:18:32.725 "product_name": "Malloc disk", 00:18:32.725 "block_size": 512, 00:18:32.725 "num_blocks": 65536, 00:18:32.725 "uuid": "3afffc30-48bd-11ef-a06c-59ddad71024c", 00:18:32.725 "assigned_rate_limits": { 00:18:32.725 "rw_ios_per_sec": 0, 00:18:32.725 "rw_mbytes_per_sec": 0, 00:18:32.725 "r_mbytes_per_sec": 0, 00:18:32.725 "w_mbytes_per_sec": 0 00:18:32.725 }, 00:18:32.725 "claimed": false, 00:18:32.725 "zoned": false, 00:18:32.725 "supported_io_types": { 00:18:32.725 "read": true, 00:18:32.725 "write": true, 00:18:32.725 "unmap": true, 00:18:32.725 "flush": true, 00:18:32.725 "reset": true, 00:18:32.725 "nvme_admin": false, 00:18:32.725 "nvme_io": false, 00:18:32.725 "nvme_io_md": false, 00:18:32.725 "write_zeroes": true, 00:18:32.726 "zcopy": true, 00:18:32.726 "get_zone_info": false, 00:18:32.726 "zone_management": false, 00:18:32.726 "zone_append": false, 00:18:32.726 "compare": false, 00:18:32.726 "compare_and_write": false, 00:18:32.726 "abort": true, 00:18:32.726 "seek_hole": false, 00:18:32.726 "seek_data": false, 00:18:32.726 "copy": true, 00:18:32.726 "nvme_iov_md": false 00:18:32.726 }, 00:18:32.726 "memory_domains": [ 00:18:32.726 { 00:18:32.726 "dma_device_id": "system", 00:18:32.726 "dma_device_type": 1 00:18:32.726 }, 00:18:32.726 { 00:18:32.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.726 "dma_device_type": 2 00:18:32.726 } 00:18:32.726 ], 00:18:32.726 "driver_specific": {} 00:18:32.726 } 00:18:32.726 ] 00:18:32.726 06:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:32.726 06:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:32.726 06:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:32.726 06:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:33.008 BaseBdev4 00:18:33.008 06:31:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:18:33.008 06:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:33.008 06:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:33.008 06:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:33.008 06:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:33.008 06:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:33.008 06:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:33.266 06:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:33.524 [ 00:18:33.524 { 00:18:33.524 "name": "BaseBdev4", 00:18:33.524 "aliases": [ 00:18:33.524 "3b6b689e-48bd-11ef-a06c-59ddad71024c" 00:18:33.524 ], 00:18:33.524 "product_name": "Malloc disk", 00:18:33.524 "block_size": 512, 00:18:33.524 "num_blocks": 65536, 00:18:33.524 "uuid": "3b6b689e-48bd-11ef-a06c-59ddad71024c", 00:18:33.524 "assigned_rate_limits": { 00:18:33.524 "rw_ios_per_sec": 0, 00:18:33.524 "rw_mbytes_per_sec": 0, 00:18:33.524 "r_mbytes_per_sec": 0, 00:18:33.524 "w_mbytes_per_sec": 0 00:18:33.524 }, 00:18:33.524 "claimed": false, 00:18:33.524 "zoned": false, 00:18:33.524 "supported_io_types": { 00:18:33.524 "read": true, 00:18:33.524 "write": true, 00:18:33.524 "unmap": true, 00:18:33.524 "flush": true, 00:18:33.524 "reset": true, 00:18:33.524 "nvme_admin": false, 00:18:33.524 "nvme_io": false, 00:18:33.524 "nvme_io_md": false, 00:18:33.524 "write_zeroes": true, 00:18:33.524 "zcopy": true, 00:18:33.524 "get_zone_info": false, 00:18:33.524 "zone_management": false, 00:18:33.524 "zone_append": false, 00:18:33.524 "compare": false, 00:18:33.524 "compare_and_write": false, 00:18:33.524 "abort": true, 00:18:33.524 "seek_hole": false, 00:18:33.524 "seek_data": false, 00:18:33.524 "copy": true, 00:18:33.524 "nvme_iov_md": false 00:18:33.524 }, 00:18:33.524 "memory_domains": [ 00:18:33.524 { 00:18:33.524 "dma_device_id": "system", 00:18:33.524 "dma_device_type": 1 00:18:33.524 }, 00:18:33.524 { 00:18:33.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.524 "dma_device_type": 2 00:18:33.524 } 00:18:33.524 ], 00:18:33.524 "driver_specific": {} 00:18:33.524 } 00:18:33.524 ] 00:18:33.524 06:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:33.524 06:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:33.524 06:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:33.524 06:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:33.783 [2024-07-23 06:31:46.188859] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:33.783 [2024-07-23 06:31:46.188915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:33.783 [2024-07-23 06:31:46.188925] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.783 [2024-07-23 06:31:46.189482] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:33.783 [2024-07-23 06:31:46.189493] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:33.783 06:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:33.783 06:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:33.783 06:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:33.783 06:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:33.783 06:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:33.783 06:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:33.783 06:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.783 06:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.783 06:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.783 06:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.783 06:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.783 06:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.041 06:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.041 "name": "Existed_Raid", 00:18:34.041 "uuid": "3bd94608-48bd-11ef-a06c-59ddad71024c", 00:18:34.041 "strip_size_kb": 0, 00:18:34.041 "state": "configuring", 00:18:34.041 "raid_level": "raid1", 00:18:34.041 "superblock": true, 00:18:34.041 "num_base_bdevs": 4, 00:18:34.041 "num_base_bdevs_discovered": 3, 00:18:34.041 "num_base_bdevs_operational": 4, 00:18:34.041 "base_bdevs_list": [ 00:18:34.041 { 00:18:34.041 "name": "BaseBdev1", 00:18:34.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.041 "is_configured": false, 00:18:34.041 "data_offset": 0, 00:18:34.041 "data_size": 0 00:18:34.041 }, 00:18:34.041 { 00:18:34.041 "name": "BaseBdev2", 00:18:34.041 "uuid": "3a8f1126-48bd-11ef-a06c-59ddad71024c", 00:18:34.041 "is_configured": true, 00:18:34.041 "data_offset": 2048, 00:18:34.041 "data_size": 63488 00:18:34.041 }, 00:18:34.041 { 00:18:34.041 "name": "BaseBdev3", 00:18:34.041 "uuid": "3afffc30-48bd-11ef-a06c-59ddad71024c", 00:18:34.041 "is_configured": true, 00:18:34.041 "data_offset": 2048, 00:18:34.041 "data_size": 63488 00:18:34.041 }, 00:18:34.041 { 00:18:34.041 "name": "BaseBdev4", 00:18:34.041 "uuid": "3b6b689e-48bd-11ef-a06c-59ddad71024c", 00:18:34.041 "is_configured": true, 00:18:34.041 "data_offset": 2048, 00:18:34.041 "data_size": 63488 00:18:34.041 } 00:18:34.041 ] 00:18:34.041 }' 00:18:34.041 06:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.041 06:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.299 06:31:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:34.558 [2024-07-23 06:31:47.024866] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:34.558 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:34.558 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:34.558 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:34.558 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:34.559 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:34.559 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:34.559 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.559 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.559 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.559 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.559 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.559 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.124 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:35.124 "name": "Existed_Raid", 00:18:35.124 "uuid": "3bd94608-48bd-11ef-a06c-59ddad71024c", 00:18:35.124 "strip_size_kb": 0, 00:18:35.124 "state": "configuring", 00:18:35.124 "raid_level": "raid1", 00:18:35.124 "superblock": true, 00:18:35.124 "num_base_bdevs": 4, 00:18:35.124 "num_base_bdevs_discovered": 2, 00:18:35.124 "num_base_bdevs_operational": 4, 00:18:35.124 "base_bdevs_list": [ 00:18:35.124 { 00:18:35.124 "name": "BaseBdev1", 00:18:35.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.124 "is_configured": false, 00:18:35.124 "data_offset": 0, 00:18:35.124 "data_size": 0 00:18:35.124 }, 00:18:35.124 { 00:18:35.124 "name": null, 00:18:35.124 "uuid": "3a8f1126-48bd-11ef-a06c-59ddad71024c", 00:18:35.124 "is_configured": false, 00:18:35.124 "data_offset": 2048, 00:18:35.124 "data_size": 63488 00:18:35.124 }, 00:18:35.124 { 00:18:35.124 "name": "BaseBdev3", 00:18:35.124 "uuid": "3afffc30-48bd-11ef-a06c-59ddad71024c", 00:18:35.124 "is_configured": true, 00:18:35.124 "data_offset": 2048, 00:18:35.124 "data_size": 63488 00:18:35.124 }, 00:18:35.124 { 00:18:35.124 "name": "BaseBdev4", 00:18:35.124 "uuid": "3b6b689e-48bd-11ef-a06c-59ddad71024c", 00:18:35.124 "is_configured": true, 00:18:35.124 "data_offset": 2048, 00:18:35.124 "data_size": 63488 00:18:35.124 } 00:18:35.124 ] 00:18:35.124 }' 00:18:35.124 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:35.124 06:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.413 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:35.413 06:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:35.671 06:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:35.671 06:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:35.929 [2024-07-23 06:31:48.341059] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.929 BaseBdev1 00:18:35.929 06:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:35.929 06:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:35.929 06:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:35.929 06:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:35.929 06:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:35.929 06:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:35.929 06:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:36.187 06:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:36.444 [ 00:18:36.444 { 00:18:36.444 "name": "BaseBdev1", 00:18:36.444 "aliases": [ 00:18:36.445 "3d21a791-48bd-11ef-a06c-59ddad71024c" 00:18:36.445 ], 00:18:36.445 "product_name": "Malloc disk", 00:18:36.445 "block_size": 512, 00:18:36.445 "num_blocks": 65536, 00:18:36.445 "uuid": "3d21a791-48bd-11ef-a06c-59ddad71024c", 00:18:36.445 "assigned_rate_limits": { 00:18:36.445 "rw_ios_per_sec": 0, 00:18:36.445 "rw_mbytes_per_sec": 0, 00:18:36.445 "r_mbytes_per_sec": 0, 00:18:36.445 "w_mbytes_per_sec": 0 00:18:36.445 }, 00:18:36.445 "claimed": true, 00:18:36.445 "claim_type": "exclusive_write", 00:18:36.445 "zoned": false, 00:18:36.445 "supported_io_types": { 00:18:36.445 "read": true, 00:18:36.445 "write": true, 00:18:36.445 "unmap": true, 00:18:36.445 "flush": true, 00:18:36.445 "reset": true, 00:18:36.445 "nvme_admin": false, 00:18:36.445 "nvme_io": false, 00:18:36.445 "nvme_io_md": false, 00:18:36.445 "write_zeroes": true, 00:18:36.445 "zcopy": true, 00:18:36.445 "get_zone_info": false, 00:18:36.445 "zone_management": false, 00:18:36.445 "zone_append": false, 00:18:36.445 "compare": false, 00:18:36.445 "compare_and_write": false, 00:18:36.445 "abort": true, 00:18:36.445 "seek_hole": false, 00:18:36.445 "seek_data": false, 00:18:36.445 "copy": true, 00:18:36.445 "nvme_iov_md": false 00:18:36.445 }, 00:18:36.445 "memory_domains": [ 00:18:36.445 { 00:18:36.445 "dma_device_id": "system", 00:18:36.445 "dma_device_type": 1 00:18:36.445 }, 00:18:36.445 { 00:18:36.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.445 "dma_device_type": 2 00:18:36.445 } 00:18:36.445 ], 00:18:36.445 "driver_specific": {} 00:18:36.445 } 00:18:36.445 ] 00:18:36.445 06:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:36.445 06:31:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:36.445 06:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:36.445 06:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:36.445 06:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:36.445 06:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:36.445 06:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:36.445 06:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.445 06:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:36.445 06:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:36.445 06:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.445 06:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.445 06:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.702 06:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:36.702 "name": "Existed_Raid", 00:18:36.702 "uuid": "3bd94608-48bd-11ef-a06c-59ddad71024c", 00:18:36.702 "strip_size_kb": 0, 00:18:36.702 "state": "configuring", 00:18:36.702 "raid_level": "raid1", 00:18:36.702 "superblock": true, 00:18:36.702 "num_base_bdevs": 4, 00:18:36.702 "num_base_bdevs_discovered": 3, 00:18:36.702 "num_base_bdevs_operational": 4, 00:18:36.702 "base_bdevs_list": [ 00:18:36.702 { 00:18:36.702 "name": "BaseBdev1", 00:18:36.702 "uuid": "3d21a791-48bd-11ef-a06c-59ddad71024c", 00:18:36.702 "is_configured": true, 00:18:36.702 "data_offset": 2048, 00:18:36.702 "data_size": 63488 00:18:36.702 }, 00:18:36.702 { 00:18:36.702 "name": null, 00:18:36.702 "uuid": "3a8f1126-48bd-11ef-a06c-59ddad71024c", 00:18:36.702 "is_configured": false, 00:18:36.702 "data_offset": 2048, 00:18:36.702 "data_size": 63488 00:18:36.702 }, 00:18:36.702 { 00:18:36.702 "name": "BaseBdev3", 00:18:36.702 "uuid": "3afffc30-48bd-11ef-a06c-59ddad71024c", 00:18:36.702 "is_configured": true, 00:18:36.702 "data_offset": 2048, 00:18:36.702 "data_size": 63488 00:18:36.702 }, 00:18:36.702 { 00:18:36.702 "name": "BaseBdev4", 00:18:36.702 "uuid": "3b6b689e-48bd-11ef-a06c-59ddad71024c", 00:18:36.702 "is_configured": true, 00:18:36.702 "data_offset": 2048, 00:18:36.702 "data_size": 63488 00:18:36.702 } 00:18:36.702 ] 00:18:36.702 }' 00:18:36.702 06:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:36.702 06:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.960 06:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.960 06:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:37.530 06:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:37.530 06:31:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:37.795 [2024-07-23 06:31:50.056997] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:37.795 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:37.795 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:37.795 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:37.795 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:37.795 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:37.795 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:37.795 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:37.795 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:37.795 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:37.795 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:37.795 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.795 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.052 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:38.052 "name": "Existed_Raid", 00:18:38.052 "uuid": "3bd94608-48bd-11ef-a06c-59ddad71024c", 00:18:38.052 "strip_size_kb": 0, 00:18:38.052 "state": "configuring", 00:18:38.052 "raid_level": "raid1", 00:18:38.052 "superblock": true, 00:18:38.052 "num_base_bdevs": 4, 00:18:38.052 "num_base_bdevs_discovered": 2, 00:18:38.052 "num_base_bdevs_operational": 4, 00:18:38.052 "base_bdevs_list": [ 00:18:38.052 { 00:18:38.052 "name": "BaseBdev1", 00:18:38.052 "uuid": "3d21a791-48bd-11ef-a06c-59ddad71024c", 00:18:38.052 "is_configured": true, 00:18:38.052 "data_offset": 2048, 00:18:38.052 "data_size": 63488 00:18:38.052 }, 00:18:38.052 { 00:18:38.052 "name": null, 00:18:38.052 "uuid": "3a8f1126-48bd-11ef-a06c-59ddad71024c", 00:18:38.052 "is_configured": false, 00:18:38.052 "data_offset": 2048, 00:18:38.052 "data_size": 63488 00:18:38.052 }, 00:18:38.052 { 00:18:38.052 "name": null, 00:18:38.053 "uuid": "3afffc30-48bd-11ef-a06c-59ddad71024c", 00:18:38.053 "is_configured": false, 00:18:38.053 "data_offset": 2048, 00:18:38.053 "data_size": 63488 00:18:38.053 }, 00:18:38.053 { 00:18:38.053 "name": "BaseBdev4", 00:18:38.053 "uuid": "3b6b689e-48bd-11ef-a06c-59ddad71024c", 00:18:38.053 "is_configured": true, 00:18:38.053 "data_offset": 2048, 00:18:38.053 "data_size": 63488 00:18:38.053 } 00:18:38.053 ] 00:18:38.053 }' 00:18:38.053 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:38.053 06:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.310 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.310 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:38.567 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:38.567 06:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:38.825 [2024-07-23 06:31:51.161000] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:38.825 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:38.825 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:38.825 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:38.825 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:38.825 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:38.825 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:38.825 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:38.825 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:38.825 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:38.825 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:38.825 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.825 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.082 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:39.083 "name": "Existed_Raid", 00:18:39.083 "uuid": "3bd94608-48bd-11ef-a06c-59ddad71024c", 00:18:39.083 "strip_size_kb": 0, 00:18:39.083 "state": "configuring", 00:18:39.083 "raid_level": "raid1", 00:18:39.083 "superblock": true, 00:18:39.083 "num_base_bdevs": 4, 00:18:39.083 "num_base_bdevs_discovered": 3, 00:18:39.083 "num_base_bdevs_operational": 4, 00:18:39.083 "base_bdevs_list": [ 00:18:39.083 { 00:18:39.083 "name": "BaseBdev1", 00:18:39.083 "uuid": "3d21a791-48bd-11ef-a06c-59ddad71024c", 00:18:39.083 "is_configured": true, 00:18:39.083 "data_offset": 2048, 00:18:39.083 "data_size": 63488 00:18:39.083 }, 00:18:39.083 { 00:18:39.083 "name": null, 00:18:39.083 "uuid": "3a8f1126-48bd-11ef-a06c-59ddad71024c", 00:18:39.083 "is_configured": false, 00:18:39.083 "data_offset": 2048, 00:18:39.083 "data_size": 63488 00:18:39.083 }, 00:18:39.083 { 00:18:39.083 "name": "BaseBdev3", 00:18:39.083 "uuid": "3afffc30-48bd-11ef-a06c-59ddad71024c", 00:18:39.083 "is_configured": true, 00:18:39.083 "data_offset": 2048, 00:18:39.083 "data_size": 63488 00:18:39.083 }, 00:18:39.083 { 00:18:39.083 "name": "BaseBdev4", 00:18:39.083 "uuid": "3b6b689e-48bd-11ef-a06c-59ddad71024c", 00:18:39.083 "is_configured": true, 00:18:39.083 "data_offset": 2048, 
00:18:39.083 "data_size": 63488 00:18:39.083 } 00:18:39.083 ] 00:18:39.083 }' 00:18:39.083 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:39.083 06:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.340 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:39.340 06:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.904 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:39.904 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:39.904 [2024-07-23 06:31:52.413034] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:40.162 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:40.162 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:40.162 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.162 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:40.162 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:40.162 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:40.162 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.162 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.162 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.162 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.162 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.162 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.420 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.420 "name": "Existed_Raid", 00:18:40.420 "uuid": "3bd94608-48bd-11ef-a06c-59ddad71024c", 00:18:40.420 "strip_size_kb": 0, 00:18:40.420 "state": "configuring", 00:18:40.420 "raid_level": "raid1", 00:18:40.420 "superblock": true, 00:18:40.420 "num_base_bdevs": 4, 00:18:40.420 "num_base_bdevs_discovered": 2, 00:18:40.420 "num_base_bdevs_operational": 4, 00:18:40.420 "base_bdevs_list": [ 00:18:40.420 { 00:18:40.420 "name": null, 00:18:40.420 "uuid": "3d21a791-48bd-11ef-a06c-59ddad71024c", 00:18:40.420 "is_configured": false, 00:18:40.420 "data_offset": 2048, 00:18:40.420 "data_size": 63488 00:18:40.420 }, 00:18:40.420 { 00:18:40.420 "name": null, 00:18:40.420 "uuid": "3a8f1126-48bd-11ef-a06c-59ddad71024c", 00:18:40.420 "is_configured": false, 00:18:40.420 "data_offset": 2048, 00:18:40.420 "data_size": 63488 00:18:40.420 }, 00:18:40.420 { 00:18:40.420 "name": "BaseBdev3", 00:18:40.420 "uuid": 
"3afffc30-48bd-11ef-a06c-59ddad71024c", 00:18:40.420 "is_configured": true, 00:18:40.420 "data_offset": 2048, 00:18:40.420 "data_size": 63488 00:18:40.420 }, 00:18:40.420 { 00:18:40.420 "name": "BaseBdev4", 00:18:40.420 "uuid": "3b6b689e-48bd-11ef-a06c-59ddad71024c", 00:18:40.420 "is_configured": true, 00:18:40.420 "data_offset": 2048, 00:18:40.420 "data_size": 63488 00:18:40.420 } 00:18:40.420 ] 00:18:40.420 }' 00:18:40.420 06:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.420 06:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.678 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.678 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:40.938 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:40.938 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:41.196 [2024-07-23 06:31:53.574862] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:41.196 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:41.196 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:41.196 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:41.196 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:41.196 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:41.196 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:41.196 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:41.196 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:41.196 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:41.196 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:41.196 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.196 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.453 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.453 "name": "Existed_Raid", 00:18:41.453 "uuid": "3bd94608-48bd-11ef-a06c-59ddad71024c", 00:18:41.453 "strip_size_kb": 0, 00:18:41.453 "state": "configuring", 00:18:41.453 "raid_level": "raid1", 00:18:41.453 "superblock": true, 00:18:41.453 "num_base_bdevs": 4, 00:18:41.453 "num_base_bdevs_discovered": 3, 00:18:41.453 "num_base_bdevs_operational": 4, 00:18:41.453 "base_bdevs_list": [ 00:18:41.453 { 00:18:41.453 "name": null, 00:18:41.453 "uuid": "3d21a791-48bd-11ef-a06c-59ddad71024c", 00:18:41.453 "is_configured": false, 
00:18:41.453 "data_offset": 2048, 00:18:41.453 "data_size": 63488 00:18:41.453 }, 00:18:41.453 { 00:18:41.453 "name": "BaseBdev2", 00:18:41.453 "uuid": "3a8f1126-48bd-11ef-a06c-59ddad71024c", 00:18:41.453 "is_configured": true, 00:18:41.453 "data_offset": 2048, 00:18:41.453 "data_size": 63488 00:18:41.453 }, 00:18:41.453 { 00:18:41.453 "name": "BaseBdev3", 00:18:41.453 "uuid": "3afffc30-48bd-11ef-a06c-59ddad71024c", 00:18:41.453 "is_configured": true, 00:18:41.453 "data_offset": 2048, 00:18:41.453 "data_size": 63488 00:18:41.453 }, 00:18:41.453 { 00:18:41.453 "name": "BaseBdev4", 00:18:41.453 "uuid": "3b6b689e-48bd-11ef-a06c-59ddad71024c", 00:18:41.453 "is_configured": true, 00:18:41.453 "data_offset": 2048, 00:18:41.453 "data_size": 63488 00:18:41.453 } 00:18:41.453 ] 00:18:41.453 }' 00:18:41.453 06:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.453 06:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.711 06:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.711 06:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:42.277 06:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:42.277 06:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.277 06:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:42.277 06:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 3d21a791-48bd-11ef-a06c-59ddad71024c 00:18:42.536 [2024-07-23 06:31:55.023029] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:42.536 [2024-07-23 06:31:55.023087] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2e3cba834f00 00:18:42.536 [2024-07-23 06:31:55.023093] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:42.536 [2024-07-23 06:31:55.023114] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2e3cba897e20 00:18:42.536 [2024-07-23 06:31:55.023163] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2e3cba834f00 00:18:42.536 [2024-07-23 06:31:55.023168] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2e3cba834f00 00:18:42.536 [2024-07-23 06:31:55.023187] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.536 NewBaseBdev 00:18:42.536 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:42.536 06:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:18:42.536 06:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:42.536 06:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:42.536 06:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:42.536 06:31:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:42.536 06:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:42.794 06:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:43.360 [ 00:18:43.360 { 00:18:43.360 "name": "NewBaseBdev", 00:18:43.360 "aliases": [ 00:18:43.360 "3d21a791-48bd-11ef-a06c-59ddad71024c" 00:18:43.360 ], 00:18:43.360 "product_name": "Malloc disk", 00:18:43.360 "block_size": 512, 00:18:43.360 "num_blocks": 65536, 00:18:43.360 "uuid": "3d21a791-48bd-11ef-a06c-59ddad71024c", 00:18:43.360 "assigned_rate_limits": { 00:18:43.360 "rw_ios_per_sec": 0, 00:18:43.360 "rw_mbytes_per_sec": 0, 00:18:43.360 "r_mbytes_per_sec": 0, 00:18:43.360 "w_mbytes_per_sec": 0 00:18:43.360 }, 00:18:43.360 "claimed": true, 00:18:43.360 "claim_type": "exclusive_write", 00:18:43.360 "zoned": false, 00:18:43.360 "supported_io_types": { 00:18:43.360 "read": true, 00:18:43.360 "write": true, 00:18:43.360 "unmap": true, 00:18:43.360 "flush": true, 00:18:43.360 "reset": true, 00:18:43.360 "nvme_admin": false, 00:18:43.360 "nvme_io": false, 00:18:43.360 "nvme_io_md": false, 00:18:43.360 "write_zeroes": true, 00:18:43.360 "zcopy": true, 00:18:43.360 "get_zone_info": false, 00:18:43.360 "zone_management": false, 00:18:43.360 "zone_append": false, 00:18:43.360 "compare": false, 00:18:43.360 "compare_and_write": false, 00:18:43.360 "abort": true, 00:18:43.360 "seek_hole": false, 00:18:43.360 "seek_data": false, 00:18:43.360 "copy": true, 00:18:43.360 "nvme_iov_md": false 00:18:43.360 }, 00:18:43.360 "memory_domains": [ 00:18:43.360 { 00:18:43.360 "dma_device_id": "system", 00:18:43.360 "dma_device_type": 1 00:18:43.360 }, 00:18:43.360 { 00:18:43.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.360 "dma_device_type": 2 00:18:43.360 } 00:18:43.360 ], 00:18:43.360 "driver_specific": {} 00:18:43.360 } 00:18:43.360 ] 00:18:43.360 06:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:43.360 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:43.360 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.360 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:43.360 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:43.360 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:43.360 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:43.360 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.360 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.360 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.360 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.360 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.360 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.618 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.618 "name": "Existed_Raid", 00:18:43.618 "uuid": "3bd94608-48bd-11ef-a06c-59ddad71024c", 00:18:43.618 "strip_size_kb": 0, 00:18:43.618 "state": "online", 00:18:43.618 "raid_level": "raid1", 00:18:43.618 "superblock": true, 00:18:43.618 "num_base_bdevs": 4, 00:18:43.618 "num_base_bdevs_discovered": 4, 00:18:43.618 "num_base_bdevs_operational": 4, 00:18:43.618 "base_bdevs_list": [ 00:18:43.618 { 00:18:43.618 "name": "NewBaseBdev", 00:18:43.618 "uuid": "3d21a791-48bd-11ef-a06c-59ddad71024c", 00:18:43.618 "is_configured": true, 00:18:43.618 "data_offset": 2048, 00:18:43.618 "data_size": 63488 00:18:43.618 }, 00:18:43.618 { 00:18:43.618 "name": "BaseBdev2", 00:18:43.618 "uuid": "3a8f1126-48bd-11ef-a06c-59ddad71024c", 00:18:43.618 "is_configured": true, 00:18:43.618 "data_offset": 2048, 00:18:43.618 "data_size": 63488 00:18:43.618 }, 00:18:43.618 { 00:18:43.618 "name": "BaseBdev3", 00:18:43.618 "uuid": "3afffc30-48bd-11ef-a06c-59ddad71024c", 00:18:43.618 "is_configured": true, 00:18:43.618 "data_offset": 2048, 00:18:43.618 "data_size": 63488 00:18:43.618 }, 00:18:43.618 { 00:18:43.618 "name": "BaseBdev4", 00:18:43.618 "uuid": "3b6b689e-48bd-11ef-a06c-59ddad71024c", 00:18:43.618 "is_configured": true, 00:18:43.618 "data_offset": 2048, 00:18:43.618 "data_size": 63488 00:18:43.618 } 00:18:43.618 ] 00:18:43.618 }' 00:18:43.618 06:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.618 06:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.876 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:43.876 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:43.876 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:43.876 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:43.876 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:43.876 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:43.876 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:43.876 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:44.134 [2024-07-23 06:31:56.495016] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.134 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:44.134 "name": "Existed_Raid", 00:18:44.134 "aliases": [ 00:18:44.134 "3bd94608-48bd-11ef-a06c-59ddad71024c" 00:18:44.134 ], 00:18:44.134 "product_name": "Raid Volume", 00:18:44.134 "block_size": 512, 00:18:44.134 "num_blocks": 63488, 00:18:44.134 "uuid": "3bd94608-48bd-11ef-a06c-59ddad71024c", 00:18:44.134 "assigned_rate_limits": { 00:18:44.134 "rw_ios_per_sec": 0, 00:18:44.134 "rw_mbytes_per_sec": 0, 00:18:44.134 "r_mbytes_per_sec": 0, 00:18:44.134 "w_mbytes_per_sec": 0 00:18:44.134 }, 
00:18:44.134 "claimed": false, 00:18:44.134 "zoned": false, 00:18:44.134 "supported_io_types": { 00:18:44.134 "read": true, 00:18:44.134 "write": true, 00:18:44.134 "unmap": false, 00:18:44.134 "flush": false, 00:18:44.134 "reset": true, 00:18:44.134 "nvme_admin": false, 00:18:44.134 "nvme_io": false, 00:18:44.134 "nvme_io_md": false, 00:18:44.134 "write_zeroes": true, 00:18:44.134 "zcopy": false, 00:18:44.134 "get_zone_info": false, 00:18:44.134 "zone_management": false, 00:18:44.134 "zone_append": false, 00:18:44.134 "compare": false, 00:18:44.134 "compare_and_write": false, 00:18:44.134 "abort": false, 00:18:44.134 "seek_hole": false, 00:18:44.134 "seek_data": false, 00:18:44.134 "copy": false, 00:18:44.134 "nvme_iov_md": false 00:18:44.134 }, 00:18:44.134 "memory_domains": [ 00:18:44.134 { 00:18:44.134 "dma_device_id": "system", 00:18:44.134 "dma_device_type": 1 00:18:44.134 }, 00:18:44.134 { 00:18:44.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.134 "dma_device_type": 2 00:18:44.134 }, 00:18:44.134 { 00:18:44.134 "dma_device_id": "system", 00:18:44.134 "dma_device_type": 1 00:18:44.134 }, 00:18:44.134 { 00:18:44.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.134 "dma_device_type": 2 00:18:44.134 }, 00:18:44.134 { 00:18:44.134 "dma_device_id": "system", 00:18:44.134 "dma_device_type": 1 00:18:44.134 }, 00:18:44.134 { 00:18:44.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.134 "dma_device_type": 2 00:18:44.134 }, 00:18:44.134 { 00:18:44.134 "dma_device_id": "system", 00:18:44.134 "dma_device_type": 1 00:18:44.134 }, 00:18:44.134 { 00:18:44.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.134 "dma_device_type": 2 00:18:44.134 } 00:18:44.134 ], 00:18:44.134 "driver_specific": { 00:18:44.134 "raid": { 00:18:44.134 "uuid": "3bd94608-48bd-11ef-a06c-59ddad71024c", 00:18:44.134 "strip_size_kb": 0, 00:18:44.134 "state": "online", 00:18:44.134 "raid_level": "raid1", 00:18:44.134 "superblock": true, 00:18:44.134 "num_base_bdevs": 4, 00:18:44.134 "num_base_bdevs_discovered": 4, 00:18:44.134 "num_base_bdevs_operational": 4, 00:18:44.134 "base_bdevs_list": [ 00:18:44.134 { 00:18:44.134 "name": "NewBaseBdev", 00:18:44.134 "uuid": "3d21a791-48bd-11ef-a06c-59ddad71024c", 00:18:44.134 "is_configured": true, 00:18:44.134 "data_offset": 2048, 00:18:44.134 "data_size": 63488 00:18:44.134 }, 00:18:44.134 { 00:18:44.134 "name": "BaseBdev2", 00:18:44.134 "uuid": "3a8f1126-48bd-11ef-a06c-59ddad71024c", 00:18:44.134 "is_configured": true, 00:18:44.134 "data_offset": 2048, 00:18:44.134 "data_size": 63488 00:18:44.134 }, 00:18:44.134 { 00:18:44.134 "name": "BaseBdev3", 00:18:44.134 "uuid": "3afffc30-48bd-11ef-a06c-59ddad71024c", 00:18:44.134 "is_configured": true, 00:18:44.134 "data_offset": 2048, 00:18:44.134 "data_size": 63488 00:18:44.134 }, 00:18:44.134 { 00:18:44.134 "name": "BaseBdev4", 00:18:44.134 "uuid": "3b6b689e-48bd-11ef-a06c-59ddad71024c", 00:18:44.134 "is_configured": true, 00:18:44.134 "data_offset": 2048, 00:18:44.134 "data_size": 63488 00:18:44.134 } 00:18:44.134 ] 00:18:44.134 } 00:18:44.134 } 00:18:44.134 }' 00:18:44.134 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:44.134 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:44.134 BaseBdev2 00:18:44.134 BaseBdev3 00:18:44.134 BaseBdev4' 00:18:44.134 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name 
in $base_bdev_names 00:18:44.134 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:44.134 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:44.394 "name": "NewBaseBdev", 00:18:44.394 "aliases": [ 00:18:44.394 "3d21a791-48bd-11ef-a06c-59ddad71024c" 00:18:44.394 ], 00:18:44.394 "product_name": "Malloc disk", 00:18:44.394 "block_size": 512, 00:18:44.394 "num_blocks": 65536, 00:18:44.394 "uuid": "3d21a791-48bd-11ef-a06c-59ddad71024c", 00:18:44.394 "assigned_rate_limits": { 00:18:44.394 "rw_ios_per_sec": 0, 00:18:44.394 "rw_mbytes_per_sec": 0, 00:18:44.394 "r_mbytes_per_sec": 0, 00:18:44.394 "w_mbytes_per_sec": 0 00:18:44.394 }, 00:18:44.394 "claimed": true, 00:18:44.394 "claim_type": "exclusive_write", 00:18:44.394 "zoned": false, 00:18:44.394 "supported_io_types": { 00:18:44.394 "read": true, 00:18:44.394 "write": true, 00:18:44.394 "unmap": true, 00:18:44.394 "flush": true, 00:18:44.394 "reset": true, 00:18:44.394 "nvme_admin": false, 00:18:44.394 "nvme_io": false, 00:18:44.394 "nvme_io_md": false, 00:18:44.394 "write_zeroes": true, 00:18:44.394 "zcopy": true, 00:18:44.394 "get_zone_info": false, 00:18:44.394 "zone_management": false, 00:18:44.394 "zone_append": false, 00:18:44.394 "compare": false, 00:18:44.394 "compare_and_write": false, 00:18:44.394 "abort": true, 00:18:44.394 "seek_hole": false, 00:18:44.394 "seek_data": false, 00:18:44.394 "copy": true, 00:18:44.394 "nvme_iov_md": false 00:18:44.394 }, 00:18:44.394 "memory_domains": [ 00:18:44.394 { 00:18:44.394 "dma_device_id": "system", 00:18:44.394 "dma_device_type": 1 00:18:44.394 }, 00:18:44.394 { 00:18:44.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.394 "dma_device_type": 2 00:18:44.394 } 00:18:44.394 ], 00:18:44.394 "driver_specific": {} 00:18:44.394 }' 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:44.394 06:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:44.394 06:31:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:44.962 "name": "BaseBdev2", 00:18:44.962 "aliases": [ 00:18:44.962 "3a8f1126-48bd-11ef-a06c-59ddad71024c" 00:18:44.962 ], 00:18:44.962 "product_name": "Malloc disk", 00:18:44.962 "block_size": 512, 00:18:44.962 "num_blocks": 65536, 00:18:44.962 "uuid": "3a8f1126-48bd-11ef-a06c-59ddad71024c", 00:18:44.962 "assigned_rate_limits": { 00:18:44.962 "rw_ios_per_sec": 0, 00:18:44.962 "rw_mbytes_per_sec": 0, 00:18:44.962 "r_mbytes_per_sec": 0, 00:18:44.962 "w_mbytes_per_sec": 0 00:18:44.962 }, 00:18:44.962 "claimed": true, 00:18:44.962 "claim_type": "exclusive_write", 00:18:44.962 "zoned": false, 00:18:44.962 "supported_io_types": { 00:18:44.962 "read": true, 00:18:44.962 "write": true, 00:18:44.962 "unmap": true, 00:18:44.962 "flush": true, 00:18:44.962 "reset": true, 00:18:44.962 "nvme_admin": false, 00:18:44.962 "nvme_io": false, 00:18:44.962 "nvme_io_md": false, 00:18:44.962 "write_zeroes": true, 00:18:44.962 "zcopy": true, 00:18:44.962 "get_zone_info": false, 00:18:44.962 "zone_management": false, 00:18:44.962 "zone_append": false, 00:18:44.962 "compare": false, 00:18:44.962 "compare_and_write": false, 00:18:44.962 "abort": true, 00:18:44.962 "seek_hole": false, 00:18:44.962 "seek_data": false, 00:18:44.962 "copy": true, 00:18:44.962 "nvme_iov_md": false 00:18:44.962 }, 00:18:44.962 "memory_domains": [ 00:18:44.962 { 00:18:44.962 "dma_device_id": "system", 00:18:44.962 "dma_device_type": 1 00:18:44.962 }, 00:18:44.962 { 00:18:44.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.962 "dma_device_type": 2 00:18:44.962 } 00:18:44.962 ], 00:18:44.962 "driver_specific": {} 00:18:44.962 }' 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:44.962 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 
-- # jq '.[]' 00:18:45.221 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:45.221 "name": "BaseBdev3", 00:18:45.221 "aliases": [ 00:18:45.221 "3afffc30-48bd-11ef-a06c-59ddad71024c" 00:18:45.221 ], 00:18:45.221 "product_name": "Malloc disk", 00:18:45.221 "block_size": 512, 00:18:45.221 "num_blocks": 65536, 00:18:45.221 "uuid": "3afffc30-48bd-11ef-a06c-59ddad71024c", 00:18:45.221 "assigned_rate_limits": { 00:18:45.221 "rw_ios_per_sec": 0, 00:18:45.221 "rw_mbytes_per_sec": 0, 00:18:45.221 "r_mbytes_per_sec": 0, 00:18:45.221 "w_mbytes_per_sec": 0 00:18:45.221 }, 00:18:45.221 "claimed": true, 00:18:45.221 "claim_type": "exclusive_write", 00:18:45.221 "zoned": false, 00:18:45.221 "supported_io_types": { 00:18:45.221 "read": true, 00:18:45.221 "write": true, 00:18:45.221 "unmap": true, 00:18:45.221 "flush": true, 00:18:45.221 "reset": true, 00:18:45.221 "nvme_admin": false, 00:18:45.221 "nvme_io": false, 00:18:45.221 "nvme_io_md": false, 00:18:45.221 "write_zeroes": true, 00:18:45.221 "zcopy": true, 00:18:45.221 "get_zone_info": false, 00:18:45.221 "zone_management": false, 00:18:45.221 "zone_append": false, 00:18:45.221 "compare": false, 00:18:45.221 "compare_and_write": false, 00:18:45.221 "abort": true, 00:18:45.221 "seek_hole": false, 00:18:45.221 "seek_data": false, 00:18:45.221 "copy": true, 00:18:45.221 "nvme_iov_md": false 00:18:45.221 }, 00:18:45.221 "memory_domains": [ 00:18:45.221 { 00:18:45.221 "dma_device_id": "system", 00:18:45.221 "dma_device_type": 1 00:18:45.221 }, 00:18:45.221 { 00:18:45.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.221 "dma_device_type": 2 00:18:45.221 } 00:18:45.221 ], 00:18:45.221 "driver_specific": {} 00:18:45.221 }' 00:18:45.221 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.221 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.221 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:45.221 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.221 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.222 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:45.222 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.222 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.222 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:45.222 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.222 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.222 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:45.222 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:45.222 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:45.222 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:45.480 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:45.480 "name": 
"BaseBdev4", 00:18:45.480 "aliases": [ 00:18:45.480 "3b6b689e-48bd-11ef-a06c-59ddad71024c" 00:18:45.480 ], 00:18:45.480 "product_name": "Malloc disk", 00:18:45.480 "block_size": 512, 00:18:45.480 "num_blocks": 65536, 00:18:45.480 "uuid": "3b6b689e-48bd-11ef-a06c-59ddad71024c", 00:18:45.480 "assigned_rate_limits": { 00:18:45.480 "rw_ios_per_sec": 0, 00:18:45.480 "rw_mbytes_per_sec": 0, 00:18:45.480 "r_mbytes_per_sec": 0, 00:18:45.480 "w_mbytes_per_sec": 0 00:18:45.480 }, 00:18:45.480 "claimed": true, 00:18:45.480 "claim_type": "exclusive_write", 00:18:45.480 "zoned": false, 00:18:45.480 "supported_io_types": { 00:18:45.480 "read": true, 00:18:45.480 "write": true, 00:18:45.480 "unmap": true, 00:18:45.480 "flush": true, 00:18:45.480 "reset": true, 00:18:45.480 "nvme_admin": false, 00:18:45.480 "nvme_io": false, 00:18:45.480 "nvme_io_md": false, 00:18:45.480 "write_zeroes": true, 00:18:45.480 "zcopy": true, 00:18:45.480 "get_zone_info": false, 00:18:45.480 "zone_management": false, 00:18:45.480 "zone_append": false, 00:18:45.480 "compare": false, 00:18:45.480 "compare_and_write": false, 00:18:45.480 "abort": true, 00:18:45.480 "seek_hole": false, 00:18:45.480 "seek_data": false, 00:18:45.480 "copy": true, 00:18:45.480 "nvme_iov_md": false 00:18:45.480 }, 00:18:45.480 "memory_domains": [ 00:18:45.480 { 00:18:45.480 "dma_device_id": "system", 00:18:45.480 "dma_device_type": 1 00:18:45.480 }, 00:18:45.480 { 00:18:45.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.480 "dma_device_type": 2 00:18:45.480 } 00:18:45.480 ], 00:18:45.480 "driver_specific": {} 00:18:45.480 }' 00:18:45.480 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.480 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.480 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:45.480 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.480 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.480 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:45.480 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.480 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.480 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:45.480 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.480 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.480 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:45.480 06:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:45.739 [2024-07-23 06:31:58.206967] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:45.739 [2024-07-23 06:31:58.206992] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.739 [2024-07-23 06:31:58.207031] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.739 [2024-07-23 06:31:58.207130] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:18:45.739 [2024-07-23 06:31:58.207150] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e3cba834f00 name Existed_Raid, state offline 00:18:45.739 06:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 63886 00:18:45.739 06:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 63886 ']' 00:18:45.739 06:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 63886 00:18:45.739 06:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:18:45.739 06:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:45.739 06:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 63886 00:18:45.739 06:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:18:45.739 06:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:18:45.739 06:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:18:45.739 killing process with pid 63886 00:18:45.739 06:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63886' 00:18:45.739 06:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 63886 00:18:45.739 [2024-07-23 06:31:58.236063] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:45.739 06:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 63886 00:18:45.739 [2024-07-23 06:31:58.260383] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:45.998 06:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:45.998 00:18:45.998 real 0m28.827s 00:18:45.998 user 0m53.174s 00:18:45.998 sys 0m3.583s 00:18:45.998 06:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:45.998 ************************************ 00:18:45.998 END TEST raid_state_function_test_sb 00:18:45.998 ************************************ 00:18:45.998 06:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.998 06:31:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:45.998 06:31:58 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:18:45.998 06:31:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:45.998 06:31:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:45.998 06:31:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.998 ************************************ 00:18:45.998 START TEST raid_superblock_test 00:18:45.998 ************************************ 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:45.998 06:31:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=64708 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 64708 /var/tmp/spdk-raid.sock 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 64708 ']' 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.998 06:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.998 [2024-07-23 06:31:58.496843] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:45.998 [2024-07-23 06:31:58.497019] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:46.565 EAL: TSC is not safe to use in SMP mode 00:18:46.565 EAL: TSC is not invariant 00:18:46.565 [2024-07-23 06:31:59.059716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.823 [2024-07-23 06:31:59.159885] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
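The setup that follows drives the bdev_svc app over the test's dedicated RPC socket: each base bdev is a malloc bdev wrapped in a passthru bdev, and the four passthru bdevs are then assembled into a raid1 with an on-disk superblock. A minimal sketch of that sequence, assuming the stock scripts/rpc.py client and the socket path shown above (the $rpc shorthand and the comments are illustrative, not part of the test script):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b malloc1        # 32 MiB backing bdev, 512 B blocks
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # ...repeated for malloc2..malloc4 / pt2..pt4, then the array is assembled:
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s   # -s writes the superblock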
00:18:46.823 [2024-07-23 06:31:59.162421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.823 [2024-07-23 06:31:59.163358] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.823 [2024-07-23 06:31:59.163376] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:47.081 06:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.081 06:31:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:18:47.081 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:47.081 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:47.081 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:47.081 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:47.081 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:47.081 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:47.081 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:47.081 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:47.081 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:47.340 malloc1 00:18:47.340 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:47.598 [2024-07-23 06:31:59.969671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:47.598 [2024-07-23 06:31:59.969730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.598 [2024-07-23 06:31:59.969743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c24cc234780 00:18:47.598 [2024-07-23 06:31:59.969751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.598 [2024-07-23 06:31:59.970748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.598 [2024-07-23 06:31:59.970776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:47.598 pt1 00:18:47.598 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:47.598 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:47.598 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:47.598 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:47.598 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:47.598 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:47.598 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:47.598 06:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:47.598 06:31:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:47.857 malloc2 00:18:47.857 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:48.116 [2024-07-23 06:32:00.453676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:48.116 [2024-07-23 06:32:00.453737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.116 [2024-07-23 06:32:00.453750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c24cc234c80 00:18:48.116 [2024-07-23 06:32:00.453758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.116 [2024-07-23 06:32:00.454407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.116 [2024-07-23 06:32:00.454431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:48.116 pt2 00:18:48.116 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:48.116 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:48.116 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:18:48.116 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:18:48.116 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:48.116 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:48.116 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:48.116 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:48.116 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:48.375 malloc3 00:18:48.375 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:48.637 [2024-07-23 06:32:00.965682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:48.637 [2024-07-23 06:32:00.965738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.637 [2024-07-23 06:32:00.965750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c24cc235180 00:18:48.637 [2024-07-23 06:32:00.965759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.637 [2024-07-23 06:32:00.966433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.637 [2024-07-23 06:32:00.966457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:48.637 pt3 00:18:48.637 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:48.637 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:48.637 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local 
bdev_malloc=malloc4 00:18:48.637 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:18:48.637 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:48.637 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:48.637 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:48.637 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:48.637 06:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:48.894 malloc4 00:18:48.894 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:49.153 [2024-07-23 06:32:01.473804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:49.153 [2024-07-23 06:32:01.473872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.153 [2024-07-23 06:32:01.473889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c24cc235680 00:18:49.153 [2024-07-23 06:32:01.473899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.153 [2024-07-23 06:32:01.474624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.153 [2024-07-23 06:32:01.474654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:49.153 pt4 00:18:49.153 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:49.153 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:49.153 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:49.411 [2024-07-23 06:32:01.769801] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:49.411 [2024-07-23 06:32:01.770399] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:49.411 [2024-07-23 06:32:01.770419] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:49.411 [2024-07-23 06:32:01.770431] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:49.411 [2024-07-23 06:32:01.770486] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3c24cc235900 00:18:49.411 [2024-07-23 06:32:01.770493] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:49.411 [2024-07-23 06:32:01.770528] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3c24cc297e20 00:18:49.411 [2024-07-23 06:32:01.770606] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3c24cc235900 00:18:49.411 [2024-07-23 06:32:01.770611] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3c24cc235900 00:18:49.411 [2024-07-23 06:32:01.770638] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.411 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:49.411 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:49.411 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:49.411 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:49.411 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:49.411 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:49.411 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.411 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.411 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.411 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.411 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.411 06:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.669 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.669 "name": "raid_bdev1", 00:18:49.669 "uuid": "4522bc8a-48bd-11ef-a06c-59ddad71024c", 00:18:49.669 "strip_size_kb": 0, 00:18:49.669 "state": "online", 00:18:49.669 "raid_level": "raid1", 00:18:49.669 "superblock": true, 00:18:49.669 "num_base_bdevs": 4, 00:18:49.669 "num_base_bdevs_discovered": 4, 00:18:49.669 "num_base_bdevs_operational": 4, 00:18:49.669 "base_bdevs_list": [ 00:18:49.669 { 00:18:49.669 "name": "pt1", 00:18:49.669 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:49.669 "is_configured": true, 00:18:49.669 "data_offset": 2048, 00:18:49.669 "data_size": 63488 00:18:49.669 }, 00:18:49.669 { 00:18:49.669 "name": "pt2", 00:18:49.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:49.669 "is_configured": true, 00:18:49.669 "data_offset": 2048, 00:18:49.669 "data_size": 63488 00:18:49.669 }, 00:18:49.669 { 00:18:49.669 "name": "pt3", 00:18:49.669 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:49.669 "is_configured": true, 00:18:49.669 "data_offset": 2048, 00:18:49.669 "data_size": 63488 00:18:49.669 }, 00:18:49.669 { 00:18:49.669 "name": "pt4", 00:18:49.669 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:49.669 "is_configured": true, 00:18:49.669 "data_offset": 2048, 00:18:49.669 "data_size": 63488 00:18:49.669 } 00:18:49.669 ] 00:18:49.669 }' 00:18:49.669 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.669 06:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.927 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:49.927 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:49.927 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:49.927 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:49.927 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:49.927 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 
-- # local name 00:18:49.927 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:49.927 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:50.186 [2024-07-23 06:32:02.689861] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.444 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:50.444 "name": "raid_bdev1", 00:18:50.444 "aliases": [ 00:18:50.445 "4522bc8a-48bd-11ef-a06c-59ddad71024c" 00:18:50.445 ], 00:18:50.445 "product_name": "Raid Volume", 00:18:50.445 "block_size": 512, 00:18:50.445 "num_blocks": 63488, 00:18:50.445 "uuid": "4522bc8a-48bd-11ef-a06c-59ddad71024c", 00:18:50.445 "assigned_rate_limits": { 00:18:50.445 "rw_ios_per_sec": 0, 00:18:50.445 "rw_mbytes_per_sec": 0, 00:18:50.445 "r_mbytes_per_sec": 0, 00:18:50.445 "w_mbytes_per_sec": 0 00:18:50.445 }, 00:18:50.445 "claimed": false, 00:18:50.445 "zoned": false, 00:18:50.445 "supported_io_types": { 00:18:50.445 "read": true, 00:18:50.445 "write": true, 00:18:50.445 "unmap": false, 00:18:50.445 "flush": false, 00:18:50.445 "reset": true, 00:18:50.445 "nvme_admin": false, 00:18:50.445 "nvme_io": false, 00:18:50.445 "nvme_io_md": false, 00:18:50.445 "write_zeroes": true, 00:18:50.445 "zcopy": false, 00:18:50.445 "get_zone_info": false, 00:18:50.445 "zone_management": false, 00:18:50.445 "zone_append": false, 00:18:50.445 "compare": false, 00:18:50.445 "compare_and_write": false, 00:18:50.445 "abort": false, 00:18:50.445 "seek_hole": false, 00:18:50.445 "seek_data": false, 00:18:50.445 "copy": false, 00:18:50.445 "nvme_iov_md": false 00:18:50.445 }, 00:18:50.445 "memory_domains": [ 00:18:50.445 { 00:18:50.445 "dma_device_id": "system", 00:18:50.445 "dma_device_type": 1 00:18:50.445 }, 00:18:50.445 { 00:18:50.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.445 "dma_device_type": 2 00:18:50.445 }, 00:18:50.445 { 00:18:50.445 "dma_device_id": "system", 00:18:50.445 "dma_device_type": 1 00:18:50.445 }, 00:18:50.445 { 00:18:50.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.445 "dma_device_type": 2 00:18:50.445 }, 00:18:50.445 { 00:18:50.445 "dma_device_id": "system", 00:18:50.445 "dma_device_type": 1 00:18:50.445 }, 00:18:50.445 { 00:18:50.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.445 "dma_device_type": 2 00:18:50.445 }, 00:18:50.445 { 00:18:50.445 "dma_device_id": "system", 00:18:50.445 "dma_device_type": 1 00:18:50.445 }, 00:18:50.445 { 00:18:50.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.445 "dma_device_type": 2 00:18:50.445 } 00:18:50.445 ], 00:18:50.445 "driver_specific": { 00:18:50.445 "raid": { 00:18:50.445 "uuid": "4522bc8a-48bd-11ef-a06c-59ddad71024c", 00:18:50.445 "strip_size_kb": 0, 00:18:50.445 "state": "online", 00:18:50.445 "raid_level": "raid1", 00:18:50.445 "superblock": true, 00:18:50.445 "num_base_bdevs": 4, 00:18:50.445 "num_base_bdevs_discovered": 4, 00:18:50.445 "num_base_bdevs_operational": 4, 00:18:50.445 "base_bdevs_list": [ 00:18:50.445 { 00:18:50.445 "name": "pt1", 00:18:50.445 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:50.445 "is_configured": true, 00:18:50.445 "data_offset": 2048, 00:18:50.445 "data_size": 63488 00:18:50.445 }, 00:18:50.445 { 00:18:50.445 "name": "pt2", 00:18:50.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:50.445 "is_configured": true, 00:18:50.445 "data_offset": 2048, 00:18:50.445 "data_size": 63488 
00:18:50.445 }, 00:18:50.445 { 00:18:50.445 "name": "pt3", 00:18:50.445 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:50.445 "is_configured": true, 00:18:50.445 "data_offset": 2048, 00:18:50.445 "data_size": 63488 00:18:50.445 }, 00:18:50.445 { 00:18:50.445 "name": "pt4", 00:18:50.445 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:50.445 "is_configured": true, 00:18:50.445 "data_offset": 2048, 00:18:50.445 "data_size": 63488 00:18:50.445 } 00:18:50.445 ] 00:18:50.445 } 00:18:50.445 } 00:18:50.445 }' 00:18:50.445 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:50.445 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:50.445 pt2 00:18:50.445 pt3 00:18:50.445 pt4' 00:18:50.445 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:50.445 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:50.445 06:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:50.777 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:50.777 "name": "pt1", 00:18:50.777 "aliases": [ 00:18:50.777 "00000000-0000-0000-0000-000000000001" 00:18:50.777 ], 00:18:50.777 "product_name": "passthru", 00:18:50.777 "block_size": 512, 00:18:50.777 "num_blocks": 65536, 00:18:50.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:50.777 "assigned_rate_limits": { 00:18:50.777 "rw_ios_per_sec": 0, 00:18:50.777 "rw_mbytes_per_sec": 0, 00:18:50.777 "r_mbytes_per_sec": 0, 00:18:50.777 "w_mbytes_per_sec": 0 00:18:50.777 }, 00:18:50.777 "claimed": true, 00:18:50.777 "claim_type": "exclusive_write", 00:18:50.777 "zoned": false, 00:18:50.777 "supported_io_types": { 00:18:50.777 "read": true, 00:18:50.777 "write": true, 00:18:50.777 "unmap": true, 00:18:50.777 "flush": true, 00:18:50.777 "reset": true, 00:18:50.777 "nvme_admin": false, 00:18:50.777 "nvme_io": false, 00:18:50.777 "nvme_io_md": false, 00:18:50.777 "write_zeroes": true, 00:18:50.777 "zcopy": true, 00:18:50.777 "get_zone_info": false, 00:18:50.777 "zone_management": false, 00:18:50.777 "zone_append": false, 00:18:50.777 "compare": false, 00:18:50.777 "compare_and_write": false, 00:18:50.777 "abort": true, 00:18:50.777 "seek_hole": false, 00:18:50.777 "seek_data": false, 00:18:50.777 "copy": true, 00:18:50.777 "nvme_iov_md": false 00:18:50.777 }, 00:18:50.777 "memory_domains": [ 00:18:50.777 { 00:18:50.777 "dma_device_id": "system", 00:18:50.778 "dma_device_type": 1 00:18:50.778 }, 00:18:50.778 { 00:18:50.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.778 "dma_device_type": 2 00:18:50.778 } 00:18:50.778 ], 00:18:50.778 "driver_specific": { 00:18:50.778 "passthru": { 00:18:50.778 "name": "pt1", 00:18:50.778 "base_bdev_name": "malloc1" 00:18:50.778 } 00:18:50.778 } 00:18:50.778 }' 00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:50.778 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:51.034 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:51.034 "name": "pt2", 00:18:51.034 "aliases": [ 00:18:51.034 "00000000-0000-0000-0000-000000000002" 00:18:51.034 ], 00:18:51.034 "product_name": "passthru", 00:18:51.034 "block_size": 512, 00:18:51.034 "num_blocks": 65536, 00:18:51.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:51.034 "assigned_rate_limits": { 00:18:51.034 "rw_ios_per_sec": 0, 00:18:51.034 "rw_mbytes_per_sec": 0, 00:18:51.034 "r_mbytes_per_sec": 0, 00:18:51.034 "w_mbytes_per_sec": 0 00:18:51.034 }, 00:18:51.034 "claimed": true, 00:18:51.034 "claim_type": "exclusive_write", 00:18:51.034 "zoned": false, 00:18:51.034 "supported_io_types": { 00:18:51.034 "read": true, 00:18:51.034 "write": true, 00:18:51.034 "unmap": true, 00:18:51.034 "flush": true, 00:18:51.034 "reset": true, 00:18:51.034 "nvme_admin": false, 00:18:51.034 "nvme_io": false, 00:18:51.034 "nvme_io_md": false, 00:18:51.034 "write_zeroes": true, 00:18:51.034 "zcopy": true, 00:18:51.034 "get_zone_info": false, 00:18:51.034 "zone_management": false, 00:18:51.034 "zone_append": false, 00:18:51.034 "compare": false, 00:18:51.034 "compare_and_write": false, 00:18:51.034 "abort": true, 00:18:51.034 "seek_hole": false, 00:18:51.034 "seek_data": false, 00:18:51.034 "copy": true, 00:18:51.034 "nvme_iov_md": false 00:18:51.034 }, 00:18:51.034 "memory_domains": [ 00:18:51.034 { 00:18:51.034 "dma_device_id": "system", 00:18:51.034 "dma_device_type": 1 00:18:51.034 }, 00:18:51.034 { 00:18:51.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.034 "dma_device_type": 2 00:18:51.034 } 00:18:51.034 ], 00:18:51.034 "driver_specific": { 00:18:51.034 "passthru": { 00:18:51.034 "name": "pt2", 00:18:51.034 "base_bdev_name": "malloc2" 00:18:51.034 } 00:18:51.034 } 00:18:51.034 }' 00:18:51.034 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.034 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.034 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:51.034 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.035 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.035 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:51.035 06:32:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.035 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.035 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:51.035 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.035 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.035 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:51.035 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:51.035 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:51.035 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:51.292 "name": "pt3", 00:18:51.292 "aliases": [ 00:18:51.292 "00000000-0000-0000-0000-000000000003" 00:18:51.292 ], 00:18:51.292 "product_name": "passthru", 00:18:51.292 "block_size": 512, 00:18:51.292 "num_blocks": 65536, 00:18:51.292 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:51.292 "assigned_rate_limits": { 00:18:51.292 "rw_ios_per_sec": 0, 00:18:51.292 "rw_mbytes_per_sec": 0, 00:18:51.292 "r_mbytes_per_sec": 0, 00:18:51.292 "w_mbytes_per_sec": 0 00:18:51.292 }, 00:18:51.292 "claimed": true, 00:18:51.292 "claim_type": "exclusive_write", 00:18:51.292 "zoned": false, 00:18:51.292 "supported_io_types": { 00:18:51.292 "read": true, 00:18:51.292 "write": true, 00:18:51.292 "unmap": true, 00:18:51.292 "flush": true, 00:18:51.292 "reset": true, 00:18:51.292 "nvme_admin": false, 00:18:51.292 "nvme_io": false, 00:18:51.292 "nvme_io_md": false, 00:18:51.292 "write_zeroes": true, 00:18:51.292 "zcopy": true, 00:18:51.292 "get_zone_info": false, 00:18:51.292 "zone_management": false, 00:18:51.292 "zone_append": false, 00:18:51.292 "compare": false, 00:18:51.292 "compare_and_write": false, 00:18:51.292 "abort": true, 00:18:51.292 "seek_hole": false, 00:18:51.292 "seek_data": false, 00:18:51.292 "copy": true, 00:18:51.292 "nvme_iov_md": false 00:18:51.292 }, 00:18:51.292 "memory_domains": [ 00:18:51.292 { 00:18:51.292 "dma_device_id": "system", 00:18:51.292 "dma_device_type": 1 00:18:51.292 }, 00:18:51.292 { 00:18:51.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.292 "dma_device_type": 2 00:18:51.292 } 00:18:51.292 ], 00:18:51.292 "driver_specific": { 00:18:51.292 "passthru": { 00:18:51.292 "name": "pt3", 00:18:51.292 "base_bdev_name": "malloc3" 00:18:51.292 } 00:18:51.292 } 00:18:51.292 }' 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:51.292 06:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:51.857 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:51.857 "name": "pt4", 00:18:51.857 "aliases": [ 00:18:51.857 "00000000-0000-0000-0000-000000000004" 00:18:51.857 ], 00:18:51.857 "product_name": "passthru", 00:18:51.857 "block_size": 512, 00:18:51.857 "num_blocks": 65536, 00:18:51.857 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:51.857 "assigned_rate_limits": { 00:18:51.857 "rw_ios_per_sec": 0, 00:18:51.857 "rw_mbytes_per_sec": 0, 00:18:51.857 "r_mbytes_per_sec": 0, 00:18:51.857 "w_mbytes_per_sec": 0 00:18:51.857 }, 00:18:51.857 "claimed": true, 00:18:51.857 "claim_type": "exclusive_write", 00:18:51.857 "zoned": false, 00:18:51.857 "supported_io_types": { 00:18:51.857 "read": true, 00:18:51.857 "write": true, 00:18:51.857 "unmap": true, 00:18:51.857 "flush": true, 00:18:51.857 "reset": true, 00:18:51.857 "nvme_admin": false, 00:18:51.857 "nvme_io": false, 00:18:51.857 "nvme_io_md": false, 00:18:51.857 "write_zeroes": true, 00:18:51.857 "zcopy": true, 00:18:51.857 "get_zone_info": false, 00:18:51.858 "zone_management": false, 00:18:51.858 "zone_append": false, 00:18:51.858 "compare": false, 00:18:51.858 "compare_and_write": false, 00:18:51.858 "abort": true, 00:18:51.858 "seek_hole": false, 00:18:51.858 "seek_data": false, 00:18:51.858 "copy": true, 00:18:51.858 "nvme_iov_md": false 00:18:51.858 }, 00:18:51.858 "memory_domains": [ 00:18:51.858 { 00:18:51.858 "dma_device_id": "system", 00:18:51.858 "dma_device_type": 1 00:18:51.858 }, 00:18:51.858 { 00:18:51.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.858 "dma_device_type": 2 00:18:51.858 } 00:18:51.858 ], 00:18:51.858 "driver_specific": { 00:18:51.858 "passthru": { 00:18:51.858 "name": "pt4", 00:18:51.858 "base_bdev_name": "malloc4" 00:18:51.858 } 00:18:51.858 } 00:18:51.858 }' 00:18:51.858 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.858 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.858 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:51.858 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.858 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.858 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:51.858 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.858 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.858 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:51.858 06:32:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.858 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.858 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:51.858 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:51.858 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:52.115 [2024-07-23 06:32:04.473969] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.115 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4522bc8a-48bd-11ef-a06c-59ddad71024c 00:18:52.115 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 4522bc8a-48bd-11ef-a06c-59ddad71024c ']' 00:18:52.115 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:52.372 [2024-07-23 06:32:04.801899] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:52.372 [2024-07-23 06:32:04.801938] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:52.372 [2024-07-23 06:32:04.801971] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.372 [2024-07-23 06:32:04.801997] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.372 [2024-07-23 06:32:04.802003] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c24cc235900 name raid_bdev1, state offline 00:18:52.373 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.373 06:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:52.967 06:32:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:52.967 06:32:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:52.967 06:32:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:52.967 06:32:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:53.223 06:32:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:53.223 06:32:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:53.480 06:32:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:53.480 06:32:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:53.736 06:32:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:53.736 06:32:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:53.994 06:32:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:53.994 06:32:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:54.252 06:32:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:54.252 06:32:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:54.252 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:18:54.252 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:54.252 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:54.252 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.252 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:54.252 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.252 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:54.252 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.252 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:54.252 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:54.252 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:54.510 [2024-07-23 06:32:06.973970] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:54.510 [2024-07-23 06:32:06.974610] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:54.510 [2024-07-23 06:32:06.974638] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:54.510 [2024-07-23 06:32:06.974646] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:54.510 [2024-07-23 06:32:06.974661] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:54.510 [2024-07-23 06:32:06.974699] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:54.510 [2024-07-23 06:32:06.974711] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:54.510 [2024-07-23 06:32:06.974728] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:54.510 [2024-07-23 06:32:06.974737] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:54.510 [2024-07-23 06:32:06.974742] bdev_raid.c: 379:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x3c24cc235680 name raid_bdev1, state configuring 00:18:54.510 request: 00:18:54.510 { 00:18:54.510 "name": "raid_bdev1", 00:18:54.510 "raid_level": "raid1", 00:18:54.510 "base_bdevs": [ 00:18:54.510 "malloc1", 00:18:54.510 "malloc2", 00:18:54.510 "malloc3", 00:18:54.510 "malloc4" 00:18:54.510 ], 00:18:54.510 "superblock": false, 00:18:54.510 "method": "bdev_raid_create", 00:18:54.510 "req_id": 1 00:18:54.510 } 00:18:54.510 Got JSON-RPC error response 00:18:54.510 response: 00:18:54.510 { 00:18:54.510 "code": -17, 00:18:54.510 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:54.510 } 00:18:54.510 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:18:54.510 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:54.510 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:54.510 06:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:54.510 06:32:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.510 06:32:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:54.768 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:54.768 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:54.768 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:55.026 [2024-07-23 06:32:07.453975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:55.026 [2024-07-23 06:32:07.454043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.026 [2024-07-23 06:32:07.454056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c24cc235180 00:18:55.026 [2024-07-23 06:32:07.454065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.026 [2024-07-23 06:32:07.454862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.026 [2024-07-23 06:32:07.454909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:55.026 [2024-07-23 06:32:07.454952] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:55.026 [2024-07-23 06:32:07.454973] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:55.026 pt1 00:18:55.026 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:55.026 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:55.026 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:55.026 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:55.026 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:55.026 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:55.026 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:55.026 
06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:55.026 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:55.026 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:55.026 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.026 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.283 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:55.283 "name": "raid_bdev1", 00:18:55.283 "uuid": "4522bc8a-48bd-11ef-a06c-59ddad71024c", 00:18:55.283 "strip_size_kb": 0, 00:18:55.283 "state": "configuring", 00:18:55.283 "raid_level": "raid1", 00:18:55.283 "superblock": true, 00:18:55.283 "num_base_bdevs": 4, 00:18:55.283 "num_base_bdevs_discovered": 1, 00:18:55.283 "num_base_bdevs_operational": 4, 00:18:55.283 "base_bdevs_list": [ 00:18:55.283 { 00:18:55.283 "name": "pt1", 00:18:55.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:55.283 "is_configured": true, 00:18:55.283 "data_offset": 2048, 00:18:55.283 "data_size": 63488 00:18:55.283 }, 00:18:55.283 { 00:18:55.283 "name": null, 00:18:55.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.283 "is_configured": false, 00:18:55.283 "data_offset": 2048, 00:18:55.283 "data_size": 63488 00:18:55.283 }, 00:18:55.283 { 00:18:55.283 "name": null, 00:18:55.283 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:55.283 "is_configured": false, 00:18:55.283 "data_offset": 2048, 00:18:55.283 "data_size": 63488 00:18:55.283 }, 00:18:55.283 { 00:18:55.283 "name": null, 00:18:55.283 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:55.283 "is_configured": false, 00:18:55.283 "data_offset": 2048, 00:18:55.283 "data_size": 63488 00:18:55.283 } 00:18:55.283 ] 00:18:55.283 }' 00:18:55.283 06:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:55.283 06:32:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.541 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:18:55.541 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:55.799 [2024-07-23 06:32:08.278012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:55.799 [2024-07-23 06:32:08.278124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.799 [2024-07-23 06:32:08.278138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c24cc234780 00:18:55.799 [2024-07-23 06:32:08.278146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.799 [2024-07-23 06:32:08.278269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.799 [2024-07-23 06:32:08.278292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:55.799 [2024-07-23 06:32:08.278317] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:55.799 [2024-07-23 06:32:08.278326] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:55.799 pt2 00:18:55.799 06:32:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:56.057 [2024-07-23 06:32:08.518009] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:56.057 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:56.057 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:56.057 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:56.057 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:56.057 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:56.057 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:56.057 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:56.057 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:56.057 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:56.057 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:56.057 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.057 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.314 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:56.314 "name": "raid_bdev1", 00:18:56.315 "uuid": "4522bc8a-48bd-11ef-a06c-59ddad71024c", 00:18:56.315 "strip_size_kb": 0, 00:18:56.315 "state": "configuring", 00:18:56.315 "raid_level": "raid1", 00:18:56.315 "superblock": true, 00:18:56.315 "num_base_bdevs": 4, 00:18:56.315 "num_base_bdevs_discovered": 1, 00:18:56.315 "num_base_bdevs_operational": 4, 00:18:56.315 "base_bdevs_list": [ 00:18:56.315 { 00:18:56.315 "name": "pt1", 00:18:56.315 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:56.315 "is_configured": true, 00:18:56.315 "data_offset": 2048, 00:18:56.315 "data_size": 63488 00:18:56.315 }, 00:18:56.315 { 00:18:56.315 "name": null, 00:18:56.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:56.315 "is_configured": false, 00:18:56.315 "data_offset": 2048, 00:18:56.315 "data_size": 63488 00:18:56.315 }, 00:18:56.315 { 00:18:56.315 "name": null, 00:18:56.315 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:56.315 "is_configured": false, 00:18:56.315 "data_offset": 2048, 00:18:56.315 "data_size": 63488 00:18:56.315 }, 00:18:56.315 { 00:18:56.315 "name": null, 00:18:56.315 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:56.315 "is_configured": false, 00:18:56.315 "data_offset": 2048, 00:18:56.315 "data_size": 63488 00:18:56.315 } 00:18:56.315 ] 00:18:56.315 }' 00:18:56.315 06:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:56.315 06:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.572 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:56.572 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:56.572 06:32:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:56.830 [2024-07-23 06:32:09.302024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:56.830 [2024-07-23 06:32:09.302093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.831 [2024-07-23 06:32:09.302105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c24cc234780 00:18:56.831 [2024-07-23 06:32:09.302113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.831 [2024-07-23 06:32:09.302235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.831 [2024-07-23 06:32:09.302246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:56.831 [2024-07-23 06:32:09.302269] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:56.831 [2024-07-23 06:32:09.302292] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:56.831 pt2 00:18:56.831 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:56.831 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:56.831 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:57.398 [2024-07-23 06:32:09.634037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:57.398 [2024-07-23 06:32:09.634093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.398 [2024-07-23 06:32:09.634105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c24cc235b80 00:18:57.398 [2024-07-23 06:32:09.634113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.398 [2024-07-23 06:32:09.634227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.398 [2024-07-23 06:32:09.634238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:57.398 [2024-07-23 06:32:09.634261] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:57.398 [2024-07-23 06:32:09.634269] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:57.398 pt3 00:18:57.398 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:57.398 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:57.398 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:57.656 [2024-07-23 06:32:09.930094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:57.656 [2024-07-23 06:32:09.930172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.656 [2024-07-23 06:32:09.930199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c24cc235900 00:18:57.656 [2024-07-23 06:32:09.930207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.656 [2024-07-23 06:32:09.930330] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.656 [2024-07-23 06:32:09.930340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:57.656 [2024-07-23 06:32:09.930363] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:57.656 [2024-07-23 06:32:09.930371] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:57.656 [2024-07-23 06:32:09.930403] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3c24cc234c80 00:18:57.656 [2024-07-23 06:32:09.930407] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:57.656 [2024-07-23 06:32:09.930428] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3c24cc297e20 00:18:57.656 [2024-07-23 06:32:09.930484] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3c24cc234c80 00:18:57.656 [2024-07-23 06:32:09.930489] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3c24cc234c80 00:18:57.656 [2024-07-23 06:32:09.930510] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.656 pt4 00:18:57.656 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:57.656 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:57.656 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:57.656 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:57.656 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:57.656 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:57.656 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:57.656 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:57.656 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:57.656 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:57.656 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:57.656 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:57.656 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.656 06:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.915 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:57.915 "name": "raid_bdev1", 00:18:57.915 "uuid": "4522bc8a-48bd-11ef-a06c-59ddad71024c", 00:18:57.915 "strip_size_kb": 0, 00:18:57.915 "state": "online", 00:18:57.915 "raid_level": "raid1", 00:18:57.915 "superblock": true, 00:18:57.915 "num_base_bdevs": 4, 00:18:57.915 "num_base_bdevs_discovered": 4, 00:18:57.915 "num_base_bdevs_operational": 4, 00:18:57.915 "base_bdevs_list": [ 00:18:57.915 { 00:18:57.915 "name": "pt1", 00:18:57.915 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:57.915 "is_configured": true, 00:18:57.915 "data_offset": 2048, 00:18:57.915 "data_size": 63488 00:18:57.915 
}, 00:18:57.915 { 00:18:57.915 "name": "pt2", 00:18:57.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:57.915 "is_configured": true, 00:18:57.915 "data_offset": 2048, 00:18:57.915 "data_size": 63488 00:18:57.915 }, 00:18:57.915 { 00:18:57.915 "name": "pt3", 00:18:57.915 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:57.915 "is_configured": true, 00:18:57.915 "data_offset": 2048, 00:18:57.915 "data_size": 63488 00:18:57.915 }, 00:18:57.915 { 00:18:57.915 "name": "pt4", 00:18:57.915 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:57.915 "is_configured": true, 00:18:57.915 "data_offset": 2048, 00:18:57.915 "data_size": 63488 00:18:57.915 } 00:18:57.915 ] 00:18:57.915 }' 00:18:57.915 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:57.915 06:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.173 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:58.173 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:58.173 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:58.173 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:58.173 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:58.173 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:58.173 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:58.173 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:58.431 [2024-07-23 06:32:10.738161] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:58.431 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:58.431 "name": "raid_bdev1", 00:18:58.431 "aliases": [ 00:18:58.431 "4522bc8a-48bd-11ef-a06c-59ddad71024c" 00:18:58.431 ], 00:18:58.431 "product_name": "Raid Volume", 00:18:58.431 "block_size": 512, 00:18:58.431 "num_blocks": 63488, 00:18:58.431 "uuid": "4522bc8a-48bd-11ef-a06c-59ddad71024c", 00:18:58.431 "assigned_rate_limits": { 00:18:58.431 "rw_ios_per_sec": 0, 00:18:58.431 "rw_mbytes_per_sec": 0, 00:18:58.431 "r_mbytes_per_sec": 0, 00:18:58.431 "w_mbytes_per_sec": 0 00:18:58.431 }, 00:18:58.431 "claimed": false, 00:18:58.431 "zoned": false, 00:18:58.431 "supported_io_types": { 00:18:58.431 "read": true, 00:18:58.431 "write": true, 00:18:58.431 "unmap": false, 00:18:58.431 "flush": false, 00:18:58.431 "reset": true, 00:18:58.431 "nvme_admin": false, 00:18:58.431 "nvme_io": false, 00:18:58.431 "nvme_io_md": false, 00:18:58.431 "write_zeroes": true, 00:18:58.431 "zcopy": false, 00:18:58.431 "get_zone_info": false, 00:18:58.431 "zone_management": false, 00:18:58.431 "zone_append": false, 00:18:58.431 "compare": false, 00:18:58.431 "compare_and_write": false, 00:18:58.431 "abort": false, 00:18:58.431 "seek_hole": false, 00:18:58.431 "seek_data": false, 00:18:58.431 "copy": false, 00:18:58.431 "nvme_iov_md": false 00:18:58.431 }, 00:18:58.431 "memory_domains": [ 00:18:58.431 { 00:18:58.431 "dma_device_id": "system", 00:18:58.431 "dma_device_type": 1 00:18:58.431 }, 00:18:58.431 { 00:18:58.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.431 "dma_device_type": 2 00:18:58.431 }, 
00:18:58.431 { 00:18:58.431 "dma_device_id": "system", 00:18:58.431 "dma_device_type": 1 00:18:58.431 }, 00:18:58.431 { 00:18:58.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.431 "dma_device_type": 2 00:18:58.431 }, 00:18:58.431 { 00:18:58.431 "dma_device_id": "system", 00:18:58.431 "dma_device_type": 1 00:18:58.431 }, 00:18:58.431 { 00:18:58.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.431 "dma_device_type": 2 00:18:58.431 }, 00:18:58.431 { 00:18:58.431 "dma_device_id": "system", 00:18:58.431 "dma_device_type": 1 00:18:58.431 }, 00:18:58.431 { 00:18:58.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.431 "dma_device_type": 2 00:18:58.431 } 00:18:58.431 ], 00:18:58.431 "driver_specific": { 00:18:58.431 "raid": { 00:18:58.431 "uuid": "4522bc8a-48bd-11ef-a06c-59ddad71024c", 00:18:58.431 "strip_size_kb": 0, 00:18:58.431 "state": "online", 00:18:58.431 "raid_level": "raid1", 00:18:58.431 "superblock": true, 00:18:58.431 "num_base_bdevs": 4, 00:18:58.431 "num_base_bdevs_discovered": 4, 00:18:58.431 "num_base_bdevs_operational": 4, 00:18:58.431 "base_bdevs_list": [ 00:18:58.431 { 00:18:58.431 "name": "pt1", 00:18:58.431 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:58.431 "is_configured": true, 00:18:58.431 "data_offset": 2048, 00:18:58.431 "data_size": 63488 00:18:58.431 }, 00:18:58.431 { 00:18:58.431 "name": "pt2", 00:18:58.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.431 "is_configured": true, 00:18:58.431 "data_offset": 2048, 00:18:58.431 "data_size": 63488 00:18:58.431 }, 00:18:58.431 { 00:18:58.431 "name": "pt3", 00:18:58.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:58.431 "is_configured": true, 00:18:58.431 "data_offset": 2048, 00:18:58.431 "data_size": 63488 00:18:58.431 }, 00:18:58.431 { 00:18:58.431 "name": "pt4", 00:18:58.431 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:58.431 "is_configured": true, 00:18:58.431 "data_offset": 2048, 00:18:58.431 "data_size": 63488 00:18:58.431 } 00:18:58.431 ] 00:18:58.431 } 00:18:58.431 } 00:18:58.431 }' 00:18:58.431 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:58.431 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:58.431 pt2 00:18:58.431 pt3 00:18:58.431 pt4' 00:18:58.431 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:58.431 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:58.432 06:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:58.690 "name": "pt1", 00:18:58.690 "aliases": [ 00:18:58.690 "00000000-0000-0000-0000-000000000001" 00:18:58.690 ], 00:18:58.690 "product_name": "passthru", 00:18:58.690 "block_size": 512, 00:18:58.690 "num_blocks": 65536, 00:18:58.690 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:58.690 "assigned_rate_limits": { 00:18:58.690 "rw_ios_per_sec": 0, 00:18:58.690 "rw_mbytes_per_sec": 0, 00:18:58.690 "r_mbytes_per_sec": 0, 00:18:58.690 "w_mbytes_per_sec": 0 00:18:58.690 }, 00:18:58.690 "claimed": true, 00:18:58.690 "claim_type": "exclusive_write", 00:18:58.690 "zoned": false, 00:18:58.690 "supported_io_types": { 00:18:58.690 "read": true, 00:18:58.690 "write": true, 00:18:58.690 
"unmap": true, 00:18:58.690 "flush": true, 00:18:58.690 "reset": true, 00:18:58.690 "nvme_admin": false, 00:18:58.690 "nvme_io": false, 00:18:58.690 "nvme_io_md": false, 00:18:58.690 "write_zeroes": true, 00:18:58.690 "zcopy": true, 00:18:58.690 "get_zone_info": false, 00:18:58.690 "zone_management": false, 00:18:58.690 "zone_append": false, 00:18:58.690 "compare": false, 00:18:58.690 "compare_and_write": false, 00:18:58.690 "abort": true, 00:18:58.690 "seek_hole": false, 00:18:58.690 "seek_data": false, 00:18:58.690 "copy": true, 00:18:58.690 "nvme_iov_md": false 00:18:58.690 }, 00:18:58.690 "memory_domains": [ 00:18:58.690 { 00:18:58.690 "dma_device_id": "system", 00:18:58.690 "dma_device_type": 1 00:18:58.690 }, 00:18:58.690 { 00:18:58.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.690 "dma_device_type": 2 00:18:58.690 } 00:18:58.690 ], 00:18:58.690 "driver_specific": { 00:18:58.690 "passthru": { 00:18:58.690 "name": "pt1", 00:18:58.690 "base_bdev_name": "malloc1" 00:18:58.690 } 00:18:58.690 } 00:18:58.690 }' 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:58.690 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:58.950 "name": "pt2", 00:18:58.950 "aliases": [ 00:18:58.950 "00000000-0000-0000-0000-000000000002" 00:18:58.950 ], 00:18:58.950 "product_name": "passthru", 00:18:58.950 "block_size": 512, 00:18:58.950 "num_blocks": 65536, 00:18:58.950 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.950 "assigned_rate_limits": { 00:18:58.950 "rw_ios_per_sec": 0, 00:18:58.950 "rw_mbytes_per_sec": 0, 00:18:58.950 "r_mbytes_per_sec": 0, 00:18:58.950 "w_mbytes_per_sec": 0 00:18:58.950 }, 00:18:58.950 "claimed": true, 00:18:58.950 "claim_type": "exclusive_write", 00:18:58.950 "zoned": false, 00:18:58.950 "supported_io_types": { 00:18:58.950 "read": true, 00:18:58.950 "write": true, 00:18:58.950 "unmap": true, 00:18:58.950 "flush": true, 00:18:58.950 "reset": true, 00:18:58.950 "nvme_admin": false, 00:18:58.950 "nvme_io": false, 00:18:58.950 
"nvme_io_md": false, 00:18:58.950 "write_zeroes": true, 00:18:58.950 "zcopy": true, 00:18:58.950 "get_zone_info": false, 00:18:58.950 "zone_management": false, 00:18:58.950 "zone_append": false, 00:18:58.950 "compare": false, 00:18:58.950 "compare_and_write": false, 00:18:58.950 "abort": true, 00:18:58.950 "seek_hole": false, 00:18:58.950 "seek_data": false, 00:18:58.950 "copy": true, 00:18:58.950 "nvme_iov_md": false 00:18:58.950 }, 00:18:58.950 "memory_domains": [ 00:18:58.950 { 00:18:58.950 "dma_device_id": "system", 00:18:58.950 "dma_device_type": 1 00:18:58.950 }, 00:18:58.950 { 00:18:58.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.950 "dma_device_type": 2 00:18:58.950 } 00:18:58.950 ], 00:18:58.950 "driver_specific": { 00:18:58.950 "passthru": { 00:18:58.950 "name": "pt2", 00:18:58.950 "base_bdev_name": "malloc2" 00:18:58.950 } 00:18:58.950 } 00:18:58.950 }' 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:58.950 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:59.212 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:59.212 "name": "pt3", 00:18:59.212 "aliases": [ 00:18:59.212 "00000000-0000-0000-0000-000000000003" 00:18:59.212 ], 00:18:59.212 "product_name": "passthru", 00:18:59.212 "block_size": 512, 00:18:59.212 "num_blocks": 65536, 00:18:59.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:59.212 "assigned_rate_limits": { 00:18:59.212 "rw_ios_per_sec": 0, 00:18:59.212 "rw_mbytes_per_sec": 0, 00:18:59.212 "r_mbytes_per_sec": 0, 00:18:59.212 "w_mbytes_per_sec": 0 00:18:59.212 }, 00:18:59.212 "claimed": true, 00:18:59.212 "claim_type": "exclusive_write", 00:18:59.212 "zoned": false, 00:18:59.212 "supported_io_types": { 00:18:59.212 "read": true, 00:18:59.212 "write": true, 00:18:59.212 "unmap": true, 00:18:59.212 "flush": true, 00:18:59.212 "reset": true, 00:18:59.212 "nvme_admin": false, 00:18:59.212 "nvme_io": false, 00:18:59.212 "nvme_io_md": false, 00:18:59.212 "write_zeroes": true, 00:18:59.212 "zcopy": true, 00:18:59.212 "get_zone_info": false, 00:18:59.212 "zone_management": 
false, 00:18:59.212 "zone_append": false, 00:18:59.212 "compare": false, 00:18:59.212 "compare_and_write": false, 00:18:59.212 "abort": true, 00:18:59.212 "seek_hole": false, 00:18:59.212 "seek_data": false, 00:18:59.212 "copy": true, 00:18:59.212 "nvme_iov_md": false 00:18:59.212 }, 00:18:59.212 "memory_domains": [ 00:18:59.212 { 00:18:59.212 "dma_device_id": "system", 00:18:59.212 "dma_device_type": 1 00:18:59.212 }, 00:18:59.212 { 00:18:59.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.212 "dma_device_type": 2 00:18:59.212 } 00:18:59.212 ], 00:18:59.212 "driver_specific": { 00:18:59.212 "passthru": { 00:18:59.212 "name": "pt3", 00:18:59.212 "base_bdev_name": "malloc3" 00:18:59.212 } 00:18:59.212 } 00:18:59.212 }' 00:18:59.212 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.212 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.212 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:59.212 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.212 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.212 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:59.212 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.212 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.212 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:59.212 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.470 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.470 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:59.470 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:59.470 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:59.470 06:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:59.728 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:59.728 "name": "pt4", 00:18:59.728 "aliases": [ 00:18:59.728 "00000000-0000-0000-0000-000000000004" 00:18:59.728 ], 00:18:59.728 "product_name": "passthru", 00:18:59.728 "block_size": 512, 00:18:59.728 "num_blocks": 65536, 00:18:59.728 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:59.728 "assigned_rate_limits": { 00:18:59.728 "rw_ios_per_sec": 0, 00:18:59.728 "rw_mbytes_per_sec": 0, 00:18:59.729 "r_mbytes_per_sec": 0, 00:18:59.729 "w_mbytes_per_sec": 0 00:18:59.729 }, 00:18:59.729 "claimed": true, 00:18:59.729 "claim_type": "exclusive_write", 00:18:59.729 "zoned": false, 00:18:59.729 "supported_io_types": { 00:18:59.729 "read": true, 00:18:59.729 "write": true, 00:18:59.729 "unmap": true, 00:18:59.729 "flush": true, 00:18:59.729 "reset": true, 00:18:59.729 "nvme_admin": false, 00:18:59.729 "nvme_io": false, 00:18:59.729 "nvme_io_md": false, 00:18:59.729 "write_zeroes": true, 00:18:59.729 "zcopy": true, 00:18:59.729 "get_zone_info": false, 00:18:59.729 "zone_management": false, 00:18:59.729 "zone_append": false, 00:18:59.729 "compare": false, 00:18:59.729 "compare_and_write": false, 00:18:59.729 "abort": true, 00:18:59.729 
"seek_hole": false, 00:18:59.729 "seek_data": false, 00:18:59.729 "copy": true, 00:18:59.729 "nvme_iov_md": false 00:18:59.729 }, 00:18:59.729 "memory_domains": [ 00:18:59.729 { 00:18:59.729 "dma_device_id": "system", 00:18:59.729 "dma_device_type": 1 00:18:59.729 }, 00:18:59.729 { 00:18:59.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.729 "dma_device_type": 2 00:18:59.729 } 00:18:59.729 ], 00:18:59.729 "driver_specific": { 00:18:59.729 "passthru": { 00:18:59.729 "name": "pt4", 00:18:59.729 "base_bdev_name": "malloc4" 00:18:59.729 } 00:18:59.729 } 00:18:59.729 }' 00:18:59.729 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.729 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.729 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:59.729 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.729 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.729 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:59.729 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.729 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.729 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:59.729 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.729 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.729 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:59.729 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:59.729 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:59.987 [2024-07-23 06:32:12.390196] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.987 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 4522bc8a-48bd-11ef-a06c-59ddad71024c '!=' 4522bc8a-48bd-11ef-a06c-59ddad71024c ']' 00:18:59.987 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:59.987 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:59.987 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:59.987 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:00.245 [2024-07-23 06:32:12.686160] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:00.245 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:00.245 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:00.245 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:00.245 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:00.245 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:00.245 06:32:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:00.245 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.245 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.245 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.245 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.245 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.245 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.503 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:00.503 "name": "raid_bdev1", 00:19:00.503 "uuid": "4522bc8a-48bd-11ef-a06c-59ddad71024c", 00:19:00.503 "strip_size_kb": 0, 00:19:00.503 "state": "online", 00:19:00.503 "raid_level": "raid1", 00:19:00.503 "superblock": true, 00:19:00.503 "num_base_bdevs": 4, 00:19:00.503 "num_base_bdevs_discovered": 3, 00:19:00.503 "num_base_bdevs_operational": 3, 00:19:00.503 "base_bdevs_list": [ 00:19:00.503 { 00:19:00.503 "name": null, 00:19:00.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.503 "is_configured": false, 00:19:00.503 "data_offset": 2048, 00:19:00.503 "data_size": 63488 00:19:00.503 }, 00:19:00.503 { 00:19:00.503 "name": "pt2", 00:19:00.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:00.503 "is_configured": true, 00:19:00.503 "data_offset": 2048, 00:19:00.503 "data_size": 63488 00:19:00.503 }, 00:19:00.503 { 00:19:00.503 "name": "pt3", 00:19:00.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:00.503 "is_configured": true, 00:19:00.503 "data_offset": 2048, 00:19:00.503 "data_size": 63488 00:19:00.503 }, 00:19:00.503 { 00:19:00.503 "name": "pt4", 00:19:00.503 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:00.504 "is_configured": true, 00:19:00.504 "data_offset": 2048, 00:19:00.504 "data_size": 63488 00:19:00.504 } 00:19:00.504 ] 00:19:00.504 }' 00:19:00.504 06:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:00.504 06:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.844 06:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:01.120 [2024-07-23 06:32:13.534170] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:01.120 [2024-07-23 06:32:13.534196] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:01.120 [2024-07-23 06:32:13.534220] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.120 [2024-07-23 06:32:13.534237] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.120 [2024-07-23 06:32:13.534242] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c24cc234c80 name raid_bdev1, state offline 00:19:01.120 06:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.120 06:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:19:01.378 06:32:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:19:01.378 06:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:19:01.378 06:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:19:01.378 06:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:19:01.378 06:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:01.636 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:19:01.636 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:19:01.636 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:01.894 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:19:01.894 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:19:01.894 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:02.153 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:19:02.153 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:19:02.153 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:19:02.153 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:19:02.153 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:02.413 [2024-07-23 06:32:14.906202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:02.413 [2024-07-23 06:32:14.906264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.413 [2024-07-23 06:32:14.906277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c24cc235900 00:19:02.413 [2024-07-23 06:32:14.906285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.413 [2024-07-23 06:32:14.906937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.413 [2024-07-23 06:32:14.906963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:02.413 [2024-07-23 06:32:14.906988] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:02.413 [2024-07-23 06:32:14.907000] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.413 pt2 00:19:02.413 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:02.413 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:02.413 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:02.413 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:02.413 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:02.413 06:32:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:02.413 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.413 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:02.414 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:02.414 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.414 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.414 06:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.672 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.672 "name": "raid_bdev1", 00:19:02.672 "uuid": "4522bc8a-48bd-11ef-a06c-59ddad71024c", 00:19:02.672 "strip_size_kb": 0, 00:19:02.672 "state": "configuring", 00:19:02.672 "raid_level": "raid1", 00:19:02.672 "superblock": true, 00:19:02.672 "num_base_bdevs": 4, 00:19:02.672 "num_base_bdevs_discovered": 1, 00:19:02.672 "num_base_bdevs_operational": 3, 00:19:02.672 "base_bdevs_list": [ 00:19:02.672 { 00:19:02.672 "name": null, 00:19:02.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.672 "is_configured": false, 00:19:02.672 "data_offset": 2048, 00:19:02.672 "data_size": 63488 00:19:02.672 }, 00:19:02.672 { 00:19:02.672 "name": "pt2", 00:19:02.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.672 "is_configured": true, 00:19:02.672 "data_offset": 2048, 00:19:02.672 "data_size": 63488 00:19:02.672 }, 00:19:02.672 { 00:19:02.672 "name": null, 00:19:02.672 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:02.672 "is_configured": false, 00:19:02.672 "data_offset": 2048, 00:19:02.672 "data_size": 63488 00:19:02.672 }, 00:19:02.672 { 00:19:02.672 "name": null, 00:19:02.672 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:02.672 "is_configured": false, 00:19:02.672 "data_offset": 2048, 00:19:02.672 "data_size": 63488 00:19:02.672 } 00:19:02.672 ] 00:19:02.672 }' 00:19:02.672 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.672 06:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.242 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:19:03.242 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:19:03.242 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:03.500 [2024-07-23 06:32:15.838274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:03.500 [2024-07-23 06:32:15.838337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.500 [2024-07-23 06:32:15.838351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c24cc235680 00:19:03.500 [2024-07-23 06:32:15.838359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.500 [2024-07-23 06:32:15.838481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.500 [2024-07-23 06:32:15.838493] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt3 00:19:03.500 [2024-07-23 06:32:15.838516] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:03.500 [2024-07-23 06:32:15.838525] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:03.500 pt3 00:19:03.500 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:03.500 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:03.500 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:03.500 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:03.500 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:03.500 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:03.500 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.500 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.500 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.500 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.500 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.500 06:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.759 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.759 "name": "raid_bdev1", 00:19:03.759 "uuid": "4522bc8a-48bd-11ef-a06c-59ddad71024c", 00:19:03.759 "strip_size_kb": 0, 00:19:03.759 "state": "configuring", 00:19:03.759 "raid_level": "raid1", 00:19:03.759 "superblock": true, 00:19:03.759 "num_base_bdevs": 4, 00:19:03.759 "num_base_bdevs_discovered": 2, 00:19:03.759 "num_base_bdevs_operational": 3, 00:19:03.759 "base_bdevs_list": [ 00:19:03.759 { 00:19:03.759 "name": null, 00:19:03.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.759 "is_configured": false, 00:19:03.759 "data_offset": 2048, 00:19:03.759 "data_size": 63488 00:19:03.759 }, 00:19:03.759 { 00:19:03.759 "name": "pt2", 00:19:03.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.759 "is_configured": true, 00:19:03.759 "data_offset": 2048, 00:19:03.759 "data_size": 63488 00:19:03.759 }, 00:19:03.759 { 00:19:03.759 "name": "pt3", 00:19:03.759 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:03.759 "is_configured": true, 00:19:03.759 "data_offset": 2048, 00:19:03.759 "data_size": 63488 00:19:03.759 }, 00:19:03.759 { 00:19:03.759 "name": null, 00:19:03.759 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:03.759 "is_configured": false, 00:19:03.759 "data_offset": 2048, 00:19:03.759 "data_size": 63488 00:19:03.759 } 00:19:03.759 ] 00:19:03.759 }' 00:19:03.759 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.759 06:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.017 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:19:04.017 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:19:04.017 06:32:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:19:04.017 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:04.277 [2024-07-23 06:32:16.706312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:04.277 [2024-07-23 06:32:16.706377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.277 [2024-07-23 06:32:16.706390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c24cc234c80 00:19:04.277 [2024-07-23 06:32:16.706398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.277 [2024-07-23 06:32:16.706533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.277 [2024-07-23 06:32:16.706545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:04.277 [2024-07-23 06:32:16.706569] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:04.277 [2024-07-23 06:32:16.706577] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:04.277 [2024-07-23 06:32:16.706616] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3c24cc234780 00:19:04.277 [2024-07-23 06:32:16.706621] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:04.277 [2024-07-23 06:32:16.706643] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3c24cc297e20 00:19:04.277 [2024-07-23 06:32:16.706692] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3c24cc234780 00:19:04.277 [2024-07-23 06:32:16.706697] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3c24cc234780 00:19:04.277 [2024-07-23 06:32:16.706718] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.277 pt4 00:19:04.277 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:04.277 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:04.277 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:04.277 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:04.277 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:04.277 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:04.277 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:04.277 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:04.277 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:04.277 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:04.277 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.277 06:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.536 06:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:19:04.536 "name": "raid_bdev1", 00:19:04.536 "uuid": "4522bc8a-48bd-11ef-a06c-59ddad71024c", 00:19:04.536 "strip_size_kb": 0, 00:19:04.536 "state": "online", 00:19:04.536 "raid_level": "raid1", 00:19:04.536 "superblock": true, 00:19:04.536 "num_base_bdevs": 4, 00:19:04.536 "num_base_bdevs_discovered": 3, 00:19:04.536 "num_base_bdevs_operational": 3, 00:19:04.536 "base_bdevs_list": [ 00:19:04.536 { 00:19:04.536 "name": null, 00:19:04.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.536 "is_configured": false, 00:19:04.536 "data_offset": 2048, 00:19:04.536 "data_size": 63488 00:19:04.536 }, 00:19:04.536 { 00:19:04.536 "name": "pt2", 00:19:04.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.536 "is_configured": true, 00:19:04.536 "data_offset": 2048, 00:19:04.536 "data_size": 63488 00:19:04.536 }, 00:19:04.536 { 00:19:04.536 "name": "pt3", 00:19:04.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:04.536 "is_configured": true, 00:19:04.536 "data_offset": 2048, 00:19:04.536 "data_size": 63488 00:19:04.536 }, 00:19:04.536 { 00:19:04.536 "name": "pt4", 00:19:04.536 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:04.536 "is_configured": true, 00:19:04.536 "data_offset": 2048, 00:19:04.536 "data_size": 63488 00:19:04.536 } 00:19:04.536 ] 00:19:04.536 }' 00:19:04.536 06:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.536 06:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.168 06:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:05.168 [2024-07-23 06:32:17.662325] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:05.168 [2024-07-23 06:32:17.662351] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:05.168 [2024-07-23 06:32:17.662376] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.168 [2024-07-23 06:32:17.662394] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.168 [2024-07-23 06:32:17.662399] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c24cc234780 name raid_bdev1, state offline 00:19:05.426 06:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.426 06:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:19:05.426 06:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:19:05.426 06:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:19:05.427 06:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:19:05.427 06:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:19:05.427 06:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:05.684 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:06.250 [2024-07-23 06:32:18.482342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 
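The bdev_raid.sh@116 through @128 trace lines above come from the suite's verify_raid_bdev_state helper: it snapshots the raid bdev's JSON through bdev_raid_get_bdevs and compares the reported fields against the expected state. A minimal sketch of that kind of check, reconstructed from the trace (only the RPC and jq calls are taken verbatim from the log; the exact assertions inside the real helper are an assumption):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    check_raid_state() {
        local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 num_operational=$5
        local info
        # Pull only the entry for this raid bdev out of the full bdev list.
        info=$($rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        # Compare the fields seen in the JSON dumps against the expectations.
        [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]] || return 1
        [[ $(jq -r '.raid_level' <<< "$info") == "$raid_level" ]] || return 1
        [[ $(jq -r '.strip_size_kb' <<< "$info") -eq $strip_size ]] || return 1
        [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq $num_operational ]] || return 1
    }
    # The @522 verification above would correspond to:
    check_raid_state raid_bdev1 online raid1 0 3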
00:19:06.250 [2024-07-23 06:32:18.482403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.250 [2024-07-23 06:32:18.482416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c24cc234c80 00:19:06.250 [2024-07-23 06:32:18.482425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.250 [2024-07-23 06:32:18.483093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.250 [2024-07-23 06:32:18.483120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:06.250 [2024-07-23 06:32:18.483147] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:06.250 [2024-07-23 06:32:18.483158] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:06.250 [2024-07-23 06:32:18.483189] bdev_raid.c:3641:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:06.250 [2024-07-23 06:32:18.483193] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.250 [2024-07-23 06:32:18.483198] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c24cc234780 name raid_bdev1, state configuring 00:19:06.250 [2024-07-23 06:32:18.483206] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:06.250 [2024-07-23 06:32:18.483225] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:06.250 pt1 00:19:06.250 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:19:06.250 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:06.250 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:06.250 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:06.250 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:06.250 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:06.250 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:06.250 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:06.250 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:06.250 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:06.250 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:06.250 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.250 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.509 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:06.509 "name": "raid_bdev1", 00:19:06.509 "uuid": "4522bc8a-48bd-11ef-a06c-59ddad71024c", 00:19:06.509 "strip_size_kb": 0, 00:19:06.509 "state": "configuring", 00:19:06.509 "raid_level": "raid1", 00:19:06.509 "superblock": true, 00:19:06.509 "num_base_bdevs": 4, 00:19:06.509 "num_base_bdevs_discovered": 2, 00:19:06.509 "num_base_bdevs_operational": 3, 00:19:06.509 
"base_bdevs_list": [ 00:19:06.509 { 00:19:06.509 "name": null, 00:19:06.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.509 "is_configured": false, 00:19:06.509 "data_offset": 2048, 00:19:06.509 "data_size": 63488 00:19:06.509 }, 00:19:06.509 { 00:19:06.509 "name": "pt2", 00:19:06.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.509 "is_configured": true, 00:19:06.509 "data_offset": 2048, 00:19:06.509 "data_size": 63488 00:19:06.509 }, 00:19:06.509 { 00:19:06.509 "name": "pt3", 00:19:06.509 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:06.509 "is_configured": true, 00:19:06.509 "data_offset": 2048, 00:19:06.509 "data_size": 63488 00:19:06.509 }, 00:19:06.509 { 00:19:06.509 "name": null, 00:19:06.509 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:06.509 "is_configured": false, 00:19:06.509 "data_offset": 2048, 00:19:06.509 "data_size": 63488 00:19:06.509 } 00:19:06.509 ] 00:19:06.509 }' 00:19:06.509 06:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:06.509 06:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.768 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:19:06.768 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:07.026 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:19:07.026 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:07.285 [2024-07-23 06:32:19.602371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:07.285 [2024-07-23 06:32:19.602427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.285 [2024-07-23 06:32:19.602440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c24cc235180 00:19:07.285 [2024-07-23 06:32:19.602448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.285 [2024-07-23 06:32:19.602565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.285 [2024-07-23 06:32:19.602577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:07.285 [2024-07-23 06:32:19.602601] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:07.285 [2024-07-23 06:32:19.602610] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:07.285 [2024-07-23 06:32:19.602641] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3c24cc234780 00:19:07.285 [2024-07-23 06:32:19.602646] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:07.285 [2024-07-23 06:32:19.602667] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3c24cc297e20 00:19:07.285 [2024-07-23 06:32:19.602716] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3c24cc234780 00:19:07.285 [2024-07-23 06:32:19.602720] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3c24cc234780 00:19:07.285 [2024-07-23 06:32:19.602741] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.285 pt4 
00:19:07.285 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:07.285 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:07.285 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:07.285 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:07.285 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:07.285 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:07.285 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:07.285 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:07.285 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:07.285 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:07.285 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.285 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.543 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:07.543 "name": "raid_bdev1", 00:19:07.543 "uuid": "4522bc8a-48bd-11ef-a06c-59ddad71024c", 00:19:07.543 "strip_size_kb": 0, 00:19:07.543 "state": "online", 00:19:07.543 "raid_level": "raid1", 00:19:07.543 "superblock": true, 00:19:07.543 "num_base_bdevs": 4, 00:19:07.543 "num_base_bdevs_discovered": 3, 00:19:07.543 "num_base_bdevs_operational": 3, 00:19:07.543 "base_bdevs_list": [ 00:19:07.543 { 00:19:07.543 "name": null, 00:19:07.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.543 "is_configured": false, 00:19:07.543 "data_offset": 2048, 00:19:07.543 "data_size": 63488 00:19:07.543 }, 00:19:07.543 { 00:19:07.543 "name": "pt2", 00:19:07.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.543 "is_configured": true, 00:19:07.543 "data_offset": 2048, 00:19:07.543 "data_size": 63488 00:19:07.543 }, 00:19:07.543 { 00:19:07.543 "name": "pt3", 00:19:07.543 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:07.543 "is_configured": true, 00:19:07.543 "data_offset": 2048, 00:19:07.543 "data_size": 63488 00:19:07.543 }, 00:19:07.543 { 00:19:07.543 "name": "pt4", 00:19:07.543 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:07.543 "is_configured": true, 00:19:07.543 "data_offset": 2048, 00:19:07.543 "data_size": 63488 00:19:07.543 } 00:19:07.543 ] 00:19:07.543 }' 00:19:07.543 06:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:07.543 06:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.801 06:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:07.801 06:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:19:08.060 06:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:19:08.060 06:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:19:08.060 06:32:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:08.318 [2024-07-23 06:32:20.706441] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.318 06:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4522bc8a-48bd-11ef-a06c-59ddad71024c '!=' 4522bc8a-48bd-11ef-a06c-59ddad71024c ']' 00:19:08.318 06:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 64708 00:19:08.318 06:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 64708 ']' 00:19:08.318 06:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 64708 00:19:08.318 06:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:19:08.318 06:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:08.318 06:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 64708 00:19:08.318 06:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:19:08.318 06:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:19:08.318 killing process with pid 64708 00:19:08.318 06:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:19:08.318 06:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64708' 00:19:08.318 06:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 64708 00:19:08.318 06:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 64708 00:19:08.318 [2024-07-23 06:32:20.737423] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:08.318 [2024-07-23 06:32:20.737467] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.318 [2024-07-23 06:32:20.737493] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.318 [2024-07-23 06:32:20.737500] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c24cc234780 name raid_bdev1, state offline 00:19:08.318 [2024-07-23 06:32:20.761796] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:08.577 06:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:19:08.577 00:19:08.577 real 0m22.454s 00:19:08.577 user 0m40.954s 00:19:08.577 sys 0m3.119s 00:19:08.577 06:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:08.577 06:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.577 ************************************ 00:19:08.577 END TEST raid_superblock_test 00:19:08.577 ************************************ 00:19:08.577 06:32:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:08.577 06:32:20 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:19:08.577 06:32:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:08.577 06:32:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.577 06:32:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:08.577 ************************************ 00:19:08.577 START TEST raid_read_error_test 00:19:08.577 
************************************ 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.7hGJmqAkfs 00:19:08.577 06:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=65352 00:19:08.577 06:32:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 65352 /var/tmp/spdk-raid.sock 00:19:08.577 06:32:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:08.577 06:32:21 
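raid_io_error_test starts a dedicated bdevperf instance in idle mode on the raid RPC socket before any bdevs exist, then waits for that socket to come up. A sketch of the launch, assembled from the bdevperf command and the waitforlisten call in the trace (the redirection into the mktemp log file is an assumption; the trace only shows the log path being created and parsed later):

    # Temp file that is later scanned for the raid_bdev1 failure rate.
    bdevperf_log=$(mktemp -p /raidtest)
    # Start bdevperf idle (-z) on its own RPC socket, targeting raid_bdev1 with a
    # 60 s randrw 50/50 workload, 128k I/O size, queue depth 1, bdev_raid debug logging.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid \
        > "$bdevperf_log" 2>&1 &
    raid_pid=$!
    # waitforlisten is the autotest_common.sh helper seen in the trace.
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock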
bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 65352 ']' 00:19:08.577 06:32:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:08.577 06:32:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:08.577 06:32:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:08.578 06:32:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.578 06:32:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.578 [2024-07-23 06:32:21.006494] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:08.578 [2024-07-23 06:32:21.006673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:09.145 EAL: TSC is not safe to use in SMP mode 00:19:09.145 EAL: TSC is not invariant 00:19:09.145 [2024-07-23 06:32:21.589295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.405 [2024-07-23 06:32:21.689090] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:09.405 [2024-07-23 06:32:21.691637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.405 [2024-07-23 06:32:21.692591] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.405 [2024-07-23 06:32:21.692608] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.664 06:32:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.664 06:32:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:09.664 06:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:09.664 06:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:09.922 BaseBdev1_malloc 00:19:09.922 06:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:10.181 true 00:19:10.181 06:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:10.439 [2024-07-23 06:32:22.833861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:10.439 [2024-07-23 06:32:22.833935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.439 [2024-07-23 06:32:22.833973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4ef6a834780 00:19:10.439 [2024-07-23 06:32:22.833983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.439 [2024-07-23 06:32:22.834647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.439 [2024-07-23 06:32:22.834675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:10.439 BaseBdev1 
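Each BaseBdevN used by the error tests is a three-layer stack: a malloc bdev, an error bdev wrapped around it (the EE_ prefix), and a passthru bdev on top, which is what the raid bdev actually claims. BaseBdev1 has just been built above; BaseBdev2 through BaseBdev4 follow below before the four members are assembled into raid_bdev1. A condensed sketch of that sequence (the loop form is an assumption; the trace shows only the expanded per-bdev calls):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    base_bdevs=()
    for i in 1 2 3 4; do
        # 32 MiB malloc bdev with 512-byte blocks, as in the trace.
        $rpc_py bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        # Error bdev (EE_*) so individual I/Os can be failed later.
        $rpc_py bdev_error_create BaseBdev${i}_malloc
        # Passthru bdev on top of the error bdev; this is the raid member.
        $rpc_py bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
        base_bdevs+=("BaseBdev${i}")
    done
    # Assemble the four members into a raid1 bdev with an on-disk superblock (-s).
    $rpc_py bdev_raid_create -r raid1 -b "${base_bdevs[*]}" -n raid_bdev1 -s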
00:19:10.439 06:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:10.439 06:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:10.698 BaseBdev2_malloc 00:19:10.698 06:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:10.957 true 00:19:10.957 06:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:11.216 [2024-07-23 06:32:23.653872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:11.216 [2024-07-23 06:32:23.653929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.216 [2024-07-23 06:32:23.653966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4ef6a834c80 00:19:11.216 [2024-07-23 06:32:23.653983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.216 [2024-07-23 06:32:23.654649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.216 [2024-07-23 06:32:23.654671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:11.216 BaseBdev2 00:19:11.216 06:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:11.216 06:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:11.494 BaseBdev3_malloc 00:19:11.494 06:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:11.752 true 00:19:11.752 06:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:12.011 [2024-07-23 06:32:24.449958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:12.011 [2024-07-23 06:32:24.450073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.011 [2024-07-23 06:32:24.450130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4ef6a835180 00:19:12.011 [2024-07-23 06:32:24.450150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.011 [2024-07-23 06:32:24.451132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.011 [2024-07-23 06:32:24.451169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:12.011 BaseBdev3 00:19:12.011 06:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:12.011 06:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:12.270 BaseBdev4_malloc 00:19:12.270 06:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_error_create BaseBdev4_malloc 00:19:12.529 true 00:19:12.529 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:12.787 [2024-07-23 06:32:25.233932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:12.787 [2024-07-23 06:32:25.233997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.787 [2024-07-23 06:32:25.234029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4ef6a835680 00:19:12.787 [2024-07-23 06:32:25.234039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.787 [2024-07-23 06:32:25.234757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.787 [2024-07-23 06:32:25.234783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:12.787 BaseBdev4 00:19:12.787 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:19:13.047 [2024-07-23 06:32:25.469937] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:13.047 [2024-07-23 06:32:25.470547] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:13.047 [2024-07-23 06:32:25.470573] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:13.047 [2024-07-23 06:32:25.470588] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:13.047 [2024-07-23 06:32:25.470658] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x4ef6a835900 00:19:13.047 [2024-07-23 06:32:25.470664] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:13.047 [2024-07-23 06:32:25.470702] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x4ef6a8a0e20 00:19:13.047 [2024-07-23 06:32:25.470791] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x4ef6a835900 00:19:13.047 [2024-07-23 06:32:25.470796] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x4ef6a835900 00:19:13.047 [2024-07-23 06:32:25.470827] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.047 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:13.047 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:13.047 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:13.047 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:13.047 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:13.047 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:13.047 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.047 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.047 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:19:13.047 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.047 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.047 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.306 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.306 "name": "raid_bdev1", 00:19:13.307 "uuid": "534316fd-48bd-11ef-a06c-59ddad71024c", 00:19:13.307 "strip_size_kb": 0, 00:19:13.307 "state": "online", 00:19:13.307 "raid_level": "raid1", 00:19:13.307 "superblock": true, 00:19:13.307 "num_base_bdevs": 4, 00:19:13.307 "num_base_bdevs_discovered": 4, 00:19:13.307 "num_base_bdevs_operational": 4, 00:19:13.307 "base_bdevs_list": [ 00:19:13.307 { 00:19:13.307 "name": "BaseBdev1", 00:19:13.307 "uuid": "78cf0c16-2a75-5f51-be16-4e05bdd0c946", 00:19:13.307 "is_configured": true, 00:19:13.307 "data_offset": 2048, 00:19:13.307 "data_size": 63488 00:19:13.307 }, 00:19:13.307 { 00:19:13.307 "name": "BaseBdev2", 00:19:13.307 "uuid": "248cf8ae-4ca0-265f-8a8f-c85b2e08274b", 00:19:13.307 "is_configured": true, 00:19:13.307 "data_offset": 2048, 00:19:13.307 "data_size": 63488 00:19:13.307 }, 00:19:13.307 { 00:19:13.307 "name": "BaseBdev3", 00:19:13.307 "uuid": "ea33d4fd-784a-a655-b425-d83cd3fafaca", 00:19:13.307 "is_configured": true, 00:19:13.307 "data_offset": 2048, 00:19:13.307 "data_size": 63488 00:19:13.307 }, 00:19:13.307 { 00:19:13.307 "name": "BaseBdev4", 00:19:13.307 "uuid": "8770e5c3-681e-c35d-a5c8-b85569cd33a5", 00:19:13.307 "is_configured": true, 00:19:13.307 "data_offset": 2048, 00:19:13.307 "data_size": 63488 00:19:13.307 } 00:19:13.307 ] 00:19:13.307 }' 00:19:13.307 06:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.307 06:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.566 06:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:13.566 06:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:13.826 [2024-07-23 06:32:26.170126] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x4ef6a8a0ec0 00:19:14.763 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:15.022 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:15.022 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:15.022 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:19:15.023 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:19:15.023 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:15.023 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:15.023 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:15.023 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:15.023 
06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:15.023 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:15.023 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.023 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.023 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.023 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.023 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.023 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.282 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.282 "name": "raid_bdev1", 00:19:15.282 "uuid": "534316fd-48bd-11ef-a06c-59ddad71024c", 00:19:15.282 "strip_size_kb": 0, 00:19:15.282 "state": "online", 00:19:15.282 "raid_level": "raid1", 00:19:15.282 "superblock": true, 00:19:15.282 "num_base_bdevs": 4, 00:19:15.282 "num_base_bdevs_discovered": 4, 00:19:15.282 "num_base_bdevs_operational": 4, 00:19:15.282 "base_bdevs_list": [ 00:19:15.282 { 00:19:15.282 "name": "BaseBdev1", 00:19:15.282 "uuid": "78cf0c16-2a75-5f51-be16-4e05bdd0c946", 00:19:15.282 "is_configured": true, 00:19:15.282 "data_offset": 2048, 00:19:15.282 "data_size": 63488 00:19:15.282 }, 00:19:15.282 { 00:19:15.282 "name": "BaseBdev2", 00:19:15.282 "uuid": "248cf8ae-4ca0-265f-8a8f-c85b2e08274b", 00:19:15.282 "is_configured": true, 00:19:15.282 "data_offset": 2048, 00:19:15.282 "data_size": 63488 00:19:15.282 }, 00:19:15.282 { 00:19:15.282 "name": "BaseBdev3", 00:19:15.282 "uuid": "ea33d4fd-784a-a655-b425-d83cd3fafaca", 00:19:15.282 "is_configured": true, 00:19:15.282 "data_offset": 2048, 00:19:15.282 "data_size": 63488 00:19:15.282 }, 00:19:15.282 { 00:19:15.282 "name": "BaseBdev4", 00:19:15.282 "uuid": "8770e5c3-681e-c35d-a5c8-b85569cd33a5", 00:19:15.282 "is_configured": true, 00:19:15.282 "data_offset": 2048, 00:19:15.282 "data_size": 63488 00:19:15.282 } 00:19:15.282 ] 00:19:15.282 }' 00:19:15.282 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.282 06:32:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.541 06:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:15.840 [2024-07-23 06:32:28.139661] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:15.840 [2024-07-23 06:32:28.139691] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:15.840 [2024-07-23 06:32:28.140103] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.840 [2024-07-23 06:32:28.140115] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.841 [2024-07-23 06:32:28.140142] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:15.841 [2024-07-23 06:32:28.140147] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x4ef6a835900 name raid_bdev1, state offline 00:19:15.841 0 00:19:15.841 
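The read-failure pass that just finished injects errors into BaseBdev1's error bdev and then drives the workload; because raid1 can satisfy reads from another mirror, the verification keeps expected_num_base_bdevs at 4 and the array stays fully online (num_base_bdevs_discovered 4 in the dump above). A hedged sketch of that step, using only calls visible in the trace:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Fail reads submitted to the first member's error bdev.
    $rpc_py bdev_error_inject_error EE_BaseBdev1_malloc read failure
    # Kick the idle bdevperf instance into its configured workload.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests
    # raid1 is expected to keep all four base bdevs discovered and operational.
    $rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'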
06:32:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 65352 00:19:15.841 06:32:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 65352 ']' 00:19:15.841 06:32:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 65352 00:19:15.841 06:32:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:19:15.841 06:32:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:15.841 06:32:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 65352 00:19:15.841 06:32:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:19:15.841 06:32:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:19:15.841 06:32:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:19:15.841 killing process with pid 65352 00:19:15.841 06:32:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65352' 00:19:15.841 06:32:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 65352 00:19:15.841 [2024-07-23 06:32:28.166824] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:15.841 06:32:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 65352 00:19:15.841 [2024-07-23 06:32:28.190459] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:16.099 06:32:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.7hGJmqAkfs 00:19:16.099 06:32:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:16.099 06:32:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:16.099 06:32:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:19:16.099 06:32:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:19:16.099 06:32:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:16.099 06:32:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:16.099 06:32:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:16.099 00:19:16.100 real 0m7.388s 00:19:16.100 user 0m11.617s 00:19:16.100 sys 0m1.425s 00:19:16.100 06:32:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:16.100 06:32:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.100 ************************************ 00:19:16.100 END TEST raid_read_error_test 00:19:16.100 ************************************ 00:19:16.100 06:32:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:16.100 06:32:28 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:19:16.100 06:32:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:16.100 06:32:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.100 06:32:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:16.100 ************************************ 00:19:16.100 START TEST raid_write_error_test 00:19:16.100 ************************************ 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:19:16.100 06:32:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.PrIVLRYFeg 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=65490 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 65490 /var/tmp/spdk-raid.sock 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 65490 ']' 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.100 06:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.100 [2024-07-23 06:32:28.442810] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:16.100 [2024-07-23 06:32:28.442966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:16.670 EAL: TSC is not safe to use in SMP mode 00:19:16.670 EAL: TSC is not invariant 00:19:16.670 [2024-07-23 06:32:28.989236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.670 [2024-07-23 06:32:29.091025] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:16.670 [2024-07-23 06:32:29.093152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.670 [2024-07-23 06:32:29.093952] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:16.670 [2024-07-23 06:32:29.093967] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.239 06:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.239 06:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:17.239 06:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:17.239 06:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:17.498 BaseBdev1_malloc 00:19:17.498 06:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:17.756 true 00:19:17.756 06:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:18.014 [2024-07-23 06:32:30.399706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:18.014 [2024-07-23 06:32:30.399771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.014 [2024-07-23 06:32:30.399798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x128b3e834780 00:19:18.014 [2024-07-23 06:32:30.399808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.014 [2024-07-23 06:32:30.400557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.014 [2024-07-23 06:32:30.400580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:18.014 BaseBdev1 00:19:18.014 06:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:18.014 06:32:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:18.282 BaseBdev2_malloc 00:19:18.282 06:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:18.555 true 00:19:18.555 06:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:18.813 [2024-07-23 06:32:31.243742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:18.813 [2024-07-23 06:32:31.243799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.813 [2024-07-23 06:32:31.243825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x128b3e834c80 00:19:18.813 [2024-07-23 06:32:31.243834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.813 [2024-07-23 06:32:31.244523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.813 [2024-07-23 06:32:31.244548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:18.813 BaseBdev2 00:19:18.813 06:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:18.813 06:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:19.071 BaseBdev3_malloc 00:19:19.071 06:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:19.330 true 00:19:19.330 06:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:19.589 [2024-07-23 06:32:32.071763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:19.589 [2024-07-23 06:32:32.071819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.589 [2024-07-23 06:32:32.071846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x128b3e835180 00:19:19.589 [2024-07-23 06:32:32.071854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.589 [2024-07-23 06:32:32.072533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.589 [2024-07-23 06:32:32.072558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:19.589 BaseBdev3 00:19:19.589 06:32:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:19.589 06:32:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:19.849 BaseBdev4_malloc 00:19:19.849 06:32:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:19:20.108 true 00:19:20.108 06:32:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:20.401 [2024-07-23 06:32:32.859778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:20.401 [2024-07-23 06:32:32.859850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.401 [2024-07-23 06:32:32.859878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x128b3e835680 00:19:20.401 [2024-07-23 06:32:32.859887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.401 [2024-07-23 06:32:32.860589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.401 [2024-07-23 06:32:32.860618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:20.401 BaseBdev4 00:19:20.401 06:32:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:19:20.660 [2024-07-23 06:32:33.099800] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:20.660 [2024-07-23 06:32:33.100388] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.660 [2024-07-23 06:32:33.100414] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:20.660 [2024-07-23 06:32:33.100428] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:20.660 [2024-07-23 06:32:33.100508] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x128b3e835900 00:19:20.660 [2024-07-23 06:32:33.100514] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:20.660 [2024-07-23 06:32:33.100550] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x128b3e8a0e20 00:19:20.660 [2024-07-23 06:32:33.100628] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x128b3e835900 00:19:20.660 [2024-07-23 06:32:33.100633] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x128b3e835900 00:19:20.660 [2024-07-23 06:32:33.100661] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.660 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:20.660 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:20.660 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:20.660 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:20.660 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:20.660 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:20.660 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.660 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.660 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.660 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:19:20.660 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.660 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.917 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:20.917 "name": "raid_bdev1", 00:19:20.917 "uuid": "57cf501d-48bd-11ef-a06c-59ddad71024c", 00:19:20.917 "strip_size_kb": 0, 00:19:20.917 "state": "online", 00:19:20.917 "raid_level": "raid1", 00:19:20.917 "superblock": true, 00:19:20.917 "num_base_bdevs": 4, 00:19:20.917 "num_base_bdevs_discovered": 4, 00:19:20.917 "num_base_bdevs_operational": 4, 00:19:20.917 "base_bdevs_list": [ 00:19:20.917 { 00:19:20.917 "name": "BaseBdev1", 00:19:20.917 "uuid": "b12fd5ab-1999-755d-a10e-62c317cd4dce", 00:19:20.917 "is_configured": true, 00:19:20.917 "data_offset": 2048, 00:19:20.917 "data_size": 63488 00:19:20.917 }, 00:19:20.917 { 00:19:20.917 "name": "BaseBdev2", 00:19:20.917 "uuid": "a515eccf-e916-ba5c-a156-3f9b7441f94a", 00:19:20.917 "is_configured": true, 00:19:20.917 "data_offset": 2048, 00:19:20.917 "data_size": 63488 00:19:20.917 }, 00:19:20.917 { 00:19:20.917 "name": "BaseBdev3", 00:19:20.917 "uuid": "5f48386e-0ab8-7d54-97bb-82f802210181", 00:19:20.917 "is_configured": true, 00:19:20.917 "data_offset": 2048, 00:19:20.917 "data_size": 63488 00:19:20.917 }, 00:19:20.917 { 00:19:20.917 "name": "BaseBdev4", 00:19:20.917 "uuid": "c3bc0dcd-6305-305d-b275-ae1e6b00bf2d", 00:19:20.917 "is_configured": true, 00:19:20.917 "data_offset": 2048, 00:19:20.917 "data_size": 63488 00:19:20.917 } 00:19:20.917 ] 00:19:20.917 }' 00:19:20.917 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:20.917 06:32:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.482 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:21.482 06:32:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:21.482 [2024-07-23 06:32:33.827977] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x128b3e8a0ec0 00:19:22.418 06:32:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:22.677 [2024-07-23 06:32:35.033318] bdev_raid.c:2248:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:19:22.677 [2024-07-23 06:32:35.033374] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:22.677 [2024-07-23 06:32:35.033508] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x128b3e8a0ec0 00:19:22.677 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:22.677 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:22.677 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:19:22.677 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:19:22.677 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:22.677 
06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:22.677 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:22.677 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:22.677 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:22.678 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:22.678 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.678 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.678 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.678 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.678 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.678 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.936 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.936 "name": "raid_bdev1", 00:19:22.936 "uuid": "57cf501d-48bd-11ef-a06c-59ddad71024c", 00:19:22.936 "strip_size_kb": 0, 00:19:22.936 "state": "online", 00:19:22.936 "raid_level": "raid1", 00:19:22.936 "superblock": true, 00:19:22.936 "num_base_bdevs": 4, 00:19:22.936 "num_base_bdevs_discovered": 3, 00:19:22.936 "num_base_bdevs_operational": 3, 00:19:22.936 "base_bdevs_list": [ 00:19:22.936 { 00:19:22.936 "name": null, 00:19:22.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.936 "is_configured": false, 00:19:22.936 "data_offset": 2048, 00:19:22.936 "data_size": 63488 00:19:22.936 }, 00:19:22.936 { 00:19:22.936 "name": "BaseBdev2", 00:19:22.936 "uuid": "a515eccf-e916-ba5c-a156-3f9b7441f94a", 00:19:22.936 "is_configured": true, 00:19:22.936 "data_offset": 2048, 00:19:22.936 "data_size": 63488 00:19:22.936 }, 00:19:22.936 { 00:19:22.936 "name": "BaseBdev3", 00:19:22.936 "uuid": "5f48386e-0ab8-7d54-97bb-82f802210181", 00:19:22.936 "is_configured": true, 00:19:22.936 "data_offset": 2048, 00:19:22.936 "data_size": 63488 00:19:22.936 }, 00:19:22.936 { 00:19:22.936 "name": "BaseBdev4", 00:19:22.936 "uuid": "c3bc0dcd-6305-305d-b275-ae1e6b00bf2d", 00:19:22.936 "is_configured": true, 00:19:22.936 "data_offset": 2048, 00:19:22.936 "data_size": 63488 00:19:22.936 } 00:19:22.936 ] 00:19:22.936 }' 00:19:22.936 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.936 06:32:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.195 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:23.454 [2024-07-23 06:32:35.936211] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.454 [2024-07-23 06:32:35.936238] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:23.454 [2024-07-23 06:32:35.936568] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.454 [2024-07-23 06:32:35.936579] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
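[Editorial note, not part of the captured output] The bdev_raid_delete call traced above tears the degraded array down; the destruct DEBUG lines around it show the state moving from online to offline and the base bdev slots being freed. Sketch, same $RPC shorthand:

    # Delete the degraded raid1 volume built for the write-error test; the
    # DEBUG trail that follows records the online -> offline transition.
    $RPC bdev_raid_delete raid_bdev1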
00:19:23.454 [2024-07-23 06:32:35.936595] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.454 [2024-07-23 06:32:35.936600] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x128b3e835900 name raid_bdev1, state offline 00:19:23.454 0 00:19:23.454 06:32:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 65490 00:19:23.454 06:32:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 65490 ']' 00:19:23.454 06:32:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 65490 00:19:23.454 06:32:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:19:23.454 06:32:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:23.454 06:32:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:19:23.454 06:32:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 65490 00:19:23.454 06:32:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:19:23.454 06:32:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:19:23.454 killing process with pid 65490 00:19:23.454 06:32:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65490' 00:19:23.454 06:32:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 65490 00:19:23.454 [2024-07-23 06:32:35.970332] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:23.454 06:32:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 65490 00:19:23.713 [2024-07-23 06:32:35.993591] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:23.713 06:32:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:23.713 06:32:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.PrIVLRYFeg 00:19:23.713 06:32:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:23.713 06:32:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:19:23.713 06:32:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:19:23.713 06:32:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:23.714 06:32:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:23.714 06:32:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:23.714 00:19:23.714 real 0m7.765s 00:19:23.714 user 0m12.502s 00:19:23.714 sys 0m1.260s 00:19:23.714 06:32:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:23.714 ************************************ 00:19:23.714 END TEST raid_write_error_test 00:19:23.714 ************************************ 00:19:23.714 06:32:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.714 06:32:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:23.714 06:32:36 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' '' = true ']' 00:19:23.714 06:32:36 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:19:23.714 06:32:36 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:19:23.714 06:32:36 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test 
raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:19:23.714 06:32:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:23.714 06:32:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:23.714 06:32:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:23.714 ************************************ 00:19:23.714 START TEST raid_state_function_test_sb_4k 00:19:23.714 ************************************ 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=65626 00:19:23.714 Process raid pid: 65626 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 65626' 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 65626 /var/tmp/spdk-raid.sock 00:19:23.714 06:32:36 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 65626 ']' 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.714 06:32:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.973 [2024-07-23 06:32:36.245858] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:23.973 [2024-07-23 06:32:36.246130] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:24.539 EAL: TSC is not safe to use in SMP mode 00:19:24.539 EAL: TSC is not invariant 00:19:24.539 [2024-07-23 06:32:36.809960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.539 [2024-07-23 06:32:36.900546] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:24.539 [2024-07-23 06:32:36.902652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.539 [2024-07-23 06:32:36.903411] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.539 [2024-07-23 06:32:36.903428] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:25.103 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.103 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:19:25.103 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:25.362 [2024-07-23 06:32:37.632094] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:25.362 [2024-07-23 06:32:37.632150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:25.362 [2024-07-23 06:32:37.632156] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:25.362 [2024-07-23 06:32:37.632165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:25.362 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:25.362 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:25.362 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:25.362 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:19:25.362 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:25.362 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:25.362 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:25.362 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:25.362 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:25.362 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:25.362 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.362 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.620 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:25.620 "name": "Existed_Raid", 00:19:25.620 "uuid": "5a82e359-48bd-11ef-a06c-59ddad71024c", 00:19:25.620 "strip_size_kb": 0, 00:19:25.620 "state": "configuring", 00:19:25.620 "raid_level": "raid1", 00:19:25.620 "superblock": true, 00:19:25.620 "num_base_bdevs": 2, 00:19:25.620 "num_base_bdevs_discovered": 0, 00:19:25.620 "num_base_bdevs_operational": 2, 00:19:25.620 "base_bdevs_list": [ 00:19:25.620 { 00:19:25.620 "name": "BaseBdev1", 00:19:25.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.620 "is_configured": false, 00:19:25.620 "data_offset": 0, 00:19:25.620 "data_size": 0 00:19:25.620 }, 00:19:25.620 { 00:19:25.621 "name": "BaseBdev2", 00:19:25.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.621 "is_configured": false, 00:19:25.621 "data_offset": 0, 00:19:25.621 "data_size": 0 00:19:25.621 } 00:19:25.621 ] 00:19:25.621 }' 00:19:25.621 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:25.621 06:32:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.878 06:32:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:26.136 [2024-07-23 06:32:38.452092] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:26.136 [2024-07-23 06:32:38.452121] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a7e24834500 name Existed_Raid, state configuring 00:19:26.136 06:32:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:26.393 [2024-07-23 06:32:38.708118] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:26.393 [2024-07-23 06:32:38.708172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:26.393 [2024-07-23 06:32:38.708178] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:26.393 [2024-07-23 06:32:38.708187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:26.393 06:32:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:19:26.651 [2024-07-23 06:32:39.009153] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:26.651 BaseBdev1 00:19:26.651 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:26.651 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:26.651 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:26.651 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:19:26.651 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:26.651 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:26.651 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:26.909 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:27.168 [ 00:19:27.168 { 00:19:27.168 "name": "BaseBdev1", 00:19:27.168 "aliases": [ 00:19:27.168 "5b54da50-48bd-11ef-a06c-59ddad71024c" 00:19:27.168 ], 00:19:27.168 "product_name": "Malloc disk", 00:19:27.168 "block_size": 4096, 00:19:27.168 "num_blocks": 8192, 00:19:27.168 "uuid": "5b54da50-48bd-11ef-a06c-59ddad71024c", 00:19:27.168 "assigned_rate_limits": { 00:19:27.168 "rw_ios_per_sec": 0, 00:19:27.168 "rw_mbytes_per_sec": 0, 00:19:27.168 "r_mbytes_per_sec": 0, 00:19:27.168 "w_mbytes_per_sec": 0 00:19:27.168 }, 00:19:27.168 "claimed": true, 00:19:27.168 "claim_type": "exclusive_write", 00:19:27.168 "zoned": false, 00:19:27.168 "supported_io_types": { 00:19:27.168 "read": true, 00:19:27.168 "write": true, 00:19:27.168 "unmap": true, 00:19:27.168 "flush": true, 00:19:27.168 "reset": true, 00:19:27.168 "nvme_admin": false, 00:19:27.168 "nvme_io": false, 00:19:27.168 "nvme_io_md": false, 00:19:27.168 "write_zeroes": true, 00:19:27.168 "zcopy": true, 00:19:27.168 "get_zone_info": false, 00:19:27.168 "zone_management": false, 00:19:27.168 "zone_append": false, 00:19:27.168 "compare": false, 00:19:27.168 "compare_and_write": false, 00:19:27.168 "abort": true, 00:19:27.168 "seek_hole": false, 00:19:27.168 "seek_data": false, 00:19:27.168 "copy": true, 00:19:27.168 "nvme_iov_md": false 00:19:27.168 }, 00:19:27.168 "memory_domains": [ 00:19:27.168 { 00:19:27.168 "dma_device_id": "system", 00:19:27.168 "dma_device_type": 1 00:19:27.168 }, 00:19:27.168 { 00:19:27.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.168 "dma_device_type": 2 00:19:27.168 } 00:19:27.168 ], 00:19:27.168 "driver_specific": {} 00:19:27.168 } 00:19:27.168 ] 00:19:27.168 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:19:27.168 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:27.168 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:27.168 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:27.168 
06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:27.168 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:27.168 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:27.168 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:27.168 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:27.168 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:27.168 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:27.168 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.168 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.427 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.427 "name": "Existed_Raid", 00:19:27.427 "uuid": "5b271338-48bd-11ef-a06c-59ddad71024c", 00:19:27.427 "strip_size_kb": 0, 00:19:27.427 "state": "configuring", 00:19:27.427 "raid_level": "raid1", 00:19:27.427 "superblock": true, 00:19:27.427 "num_base_bdevs": 2, 00:19:27.427 "num_base_bdevs_discovered": 1, 00:19:27.427 "num_base_bdevs_operational": 2, 00:19:27.427 "base_bdevs_list": [ 00:19:27.427 { 00:19:27.427 "name": "BaseBdev1", 00:19:27.427 "uuid": "5b54da50-48bd-11ef-a06c-59ddad71024c", 00:19:27.427 "is_configured": true, 00:19:27.427 "data_offset": 256, 00:19:27.427 "data_size": 7936 00:19:27.427 }, 00:19:27.427 { 00:19:27.427 "name": "BaseBdev2", 00:19:27.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.427 "is_configured": false, 00:19:27.427 "data_offset": 0, 00:19:27.427 "data_size": 0 00:19:27.427 } 00:19:27.427 ] 00:19:27.427 }' 00:19:27.427 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.427 06:32:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.685 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:27.944 [2024-07-23 06:32:40.280187] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:27.944 [2024-07-23 06:32:40.280236] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a7e24834500 name Existed_Raid, state configuring 00:19:27.944 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:28.310 [2024-07-23 06:32:40.516236] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:28.310 [2024-07-23 06:32:40.517115] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.310 [2024-07-23 06:32:40.517154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:28.310 06:32:40 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:28.310 "name": "Existed_Raid", 00:19:28.310 "uuid": "5c3af923-48bd-11ef-a06c-59ddad71024c", 00:19:28.310 "strip_size_kb": 0, 00:19:28.310 "state": "configuring", 00:19:28.310 "raid_level": "raid1", 00:19:28.310 "superblock": true, 00:19:28.310 "num_base_bdevs": 2, 00:19:28.310 "num_base_bdevs_discovered": 1, 00:19:28.310 "num_base_bdevs_operational": 2, 00:19:28.310 "base_bdevs_list": [ 00:19:28.310 { 00:19:28.310 "name": "BaseBdev1", 00:19:28.310 "uuid": "5b54da50-48bd-11ef-a06c-59ddad71024c", 00:19:28.310 "is_configured": true, 00:19:28.310 "data_offset": 256, 00:19:28.310 "data_size": 7936 00:19:28.310 }, 00:19:28.310 { 00:19:28.310 "name": "BaseBdev2", 00:19:28.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.310 "is_configured": false, 00:19:28.310 "data_offset": 0, 00:19:28.310 "data_size": 0 00:19:28.310 } 00:19:28.310 ] 00:19:28.310 }' 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:28.310 06:32:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.583 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:19:28.841 [2024-07-23 06:32:41.288426] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:28.841 [2024-07-23 06:32:41.288503] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2a7e24834a00 00:19:28.841 [2024-07-23 06:32:41.288509] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:28.841 [2024-07-23 06:32:41.288543] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2a7e24897e20 
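[Editorial note, not part of the captured output] At this point the 4k state-function test has Existed_Raid in "configuring" state with only BaseBdev1 present; creating the second malloc bdev (32 MB, 4096-byte blocks) supplies the missing base bdev, and the raid is expected to move to "online" with 2 of 2 base bdevs discovered, as the JSON dump further down confirms. Sketch, same $RPC shorthand:

    # Create the second base bdev (32 MB, 4 KiB blocks) and re-check state.
    $RPC bdev_malloc_create 32 4096 -b BaseBdev2
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'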
00:19:28.841 [2024-07-23 06:32:41.288588] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2a7e24834a00 00:19:28.841 [2024-07-23 06:32:41.288592] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2a7e24834a00 00:19:28.841 [2024-07-23 06:32:41.288627] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.841 BaseBdev2 00:19:28.841 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:28.841 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:28.841 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:28.841 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:19:28.842 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:28.842 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:28.842 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:29.100 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:29.358 [ 00:19:29.358 { 00:19:29.358 "name": "BaseBdev2", 00:19:29.358 "aliases": [ 00:19:29.358 "5cb0c850-48bd-11ef-a06c-59ddad71024c" 00:19:29.358 ], 00:19:29.358 "product_name": "Malloc disk", 00:19:29.358 "block_size": 4096, 00:19:29.358 "num_blocks": 8192, 00:19:29.358 "uuid": "5cb0c850-48bd-11ef-a06c-59ddad71024c", 00:19:29.358 "assigned_rate_limits": { 00:19:29.358 "rw_ios_per_sec": 0, 00:19:29.358 "rw_mbytes_per_sec": 0, 00:19:29.358 "r_mbytes_per_sec": 0, 00:19:29.358 "w_mbytes_per_sec": 0 00:19:29.358 }, 00:19:29.358 "claimed": true, 00:19:29.358 "claim_type": "exclusive_write", 00:19:29.358 "zoned": false, 00:19:29.358 "supported_io_types": { 00:19:29.358 "read": true, 00:19:29.358 "write": true, 00:19:29.358 "unmap": true, 00:19:29.359 "flush": true, 00:19:29.359 "reset": true, 00:19:29.359 "nvme_admin": false, 00:19:29.359 "nvme_io": false, 00:19:29.359 "nvme_io_md": false, 00:19:29.359 "write_zeroes": true, 00:19:29.359 "zcopy": true, 00:19:29.359 "get_zone_info": false, 00:19:29.359 "zone_management": false, 00:19:29.359 "zone_append": false, 00:19:29.359 "compare": false, 00:19:29.359 "compare_and_write": false, 00:19:29.359 "abort": true, 00:19:29.359 "seek_hole": false, 00:19:29.359 "seek_data": false, 00:19:29.359 "copy": true, 00:19:29.359 "nvme_iov_md": false 00:19:29.359 }, 00:19:29.359 "memory_domains": [ 00:19:29.359 { 00:19:29.359 "dma_device_id": "system", 00:19:29.359 "dma_device_type": 1 00:19:29.359 }, 00:19:29.359 { 00:19:29.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.359 "dma_device_type": 2 00:19:29.359 } 00:19:29.359 ], 00:19:29.359 "driver_specific": {} 00:19:29.359 } 00:19:29.359 ] 00:19:29.618 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:19:29.618 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:29.618 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:29.618 06:32:41 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:29.618 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:29.618 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:29.618 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:29.618 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:29.618 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:29.618 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:29.618 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:29.618 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:29.618 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:29.618 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.618 06:32:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.618 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.618 "name": "Existed_Raid", 00:19:29.618 "uuid": "5c3af923-48bd-11ef-a06c-59ddad71024c", 00:19:29.618 "strip_size_kb": 0, 00:19:29.618 "state": "online", 00:19:29.618 "raid_level": "raid1", 00:19:29.618 "superblock": true, 00:19:29.618 "num_base_bdevs": 2, 00:19:29.618 "num_base_bdevs_discovered": 2, 00:19:29.618 "num_base_bdevs_operational": 2, 00:19:29.618 "base_bdevs_list": [ 00:19:29.618 { 00:19:29.618 "name": "BaseBdev1", 00:19:29.618 "uuid": "5b54da50-48bd-11ef-a06c-59ddad71024c", 00:19:29.618 "is_configured": true, 00:19:29.618 "data_offset": 256, 00:19:29.618 "data_size": 7936 00:19:29.618 }, 00:19:29.618 { 00:19:29.618 "name": "BaseBdev2", 00:19:29.618 "uuid": "5cb0c850-48bd-11ef-a06c-59ddad71024c", 00:19:29.618 "is_configured": true, 00:19:29.618 "data_offset": 256, 00:19:29.618 "data_size": 7936 00:19:29.618 } 00:19:29.618 ] 00:19:29.618 }' 00:19:29.618 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.618 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.185 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:30.185 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:30.185 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:30.185 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:30.185 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:30.185 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:19:30.185 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:30.185 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:30.444 [2024-07-23 06:32:42.716542] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.444 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:30.444 "name": "Existed_Raid", 00:19:30.444 "aliases": [ 00:19:30.444 "5c3af923-48bd-11ef-a06c-59ddad71024c" 00:19:30.444 ], 00:19:30.444 "product_name": "Raid Volume", 00:19:30.444 "block_size": 4096, 00:19:30.444 "num_blocks": 7936, 00:19:30.444 "uuid": "5c3af923-48bd-11ef-a06c-59ddad71024c", 00:19:30.444 "assigned_rate_limits": { 00:19:30.444 "rw_ios_per_sec": 0, 00:19:30.444 "rw_mbytes_per_sec": 0, 00:19:30.444 "r_mbytes_per_sec": 0, 00:19:30.444 "w_mbytes_per_sec": 0 00:19:30.444 }, 00:19:30.444 "claimed": false, 00:19:30.444 "zoned": false, 00:19:30.444 "supported_io_types": { 00:19:30.444 "read": true, 00:19:30.444 "write": true, 00:19:30.444 "unmap": false, 00:19:30.444 "flush": false, 00:19:30.444 "reset": true, 00:19:30.444 "nvme_admin": false, 00:19:30.444 "nvme_io": false, 00:19:30.444 "nvme_io_md": false, 00:19:30.444 "write_zeroes": true, 00:19:30.444 "zcopy": false, 00:19:30.444 "get_zone_info": false, 00:19:30.444 "zone_management": false, 00:19:30.444 "zone_append": false, 00:19:30.444 "compare": false, 00:19:30.444 "compare_and_write": false, 00:19:30.444 "abort": false, 00:19:30.444 "seek_hole": false, 00:19:30.444 "seek_data": false, 00:19:30.444 "copy": false, 00:19:30.444 "nvme_iov_md": false 00:19:30.444 }, 00:19:30.444 "memory_domains": [ 00:19:30.444 { 00:19:30.444 "dma_device_id": "system", 00:19:30.444 "dma_device_type": 1 00:19:30.444 }, 00:19:30.444 { 00:19:30.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.444 "dma_device_type": 2 00:19:30.444 }, 00:19:30.444 { 00:19:30.444 "dma_device_id": "system", 00:19:30.444 "dma_device_type": 1 00:19:30.444 }, 00:19:30.444 { 00:19:30.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.444 "dma_device_type": 2 00:19:30.444 } 00:19:30.444 ], 00:19:30.444 "driver_specific": { 00:19:30.444 "raid": { 00:19:30.444 "uuid": "5c3af923-48bd-11ef-a06c-59ddad71024c", 00:19:30.444 "strip_size_kb": 0, 00:19:30.444 "state": "online", 00:19:30.444 "raid_level": "raid1", 00:19:30.444 "superblock": true, 00:19:30.444 "num_base_bdevs": 2, 00:19:30.444 "num_base_bdevs_discovered": 2, 00:19:30.444 "num_base_bdevs_operational": 2, 00:19:30.444 "base_bdevs_list": [ 00:19:30.444 { 00:19:30.444 "name": "BaseBdev1", 00:19:30.444 "uuid": "5b54da50-48bd-11ef-a06c-59ddad71024c", 00:19:30.444 "is_configured": true, 00:19:30.444 "data_offset": 256, 00:19:30.444 "data_size": 7936 00:19:30.444 }, 00:19:30.444 { 00:19:30.444 "name": "BaseBdev2", 00:19:30.444 "uuid": "5cb0c850-48bd-11ef-a06c-59ddad71024c", 00:19:30.444 "is_configured": true, 00:19:30.444 "data_offset": 256, 00:19:30.444 "data_size": 7936 00:19:30.444 } 00:19:30.444 ] 00:19:30.444 } 00:19:30.444 } 00:19:30.444 }' 00:19:30.444 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:30.444 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:30.444 BaseBdev2' 00:19:30.444 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:30.444 06:32:42 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:30.444 06:32:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:30.702 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:30.702 "name": "BaseBdev1", 00:19:30.703 "aliases": [ 00:19:30.703 "5b54da50-48bd-11ef-a06c-59ddad71024c" 00:19:30.703 ], 00:19:30.703 "product_name": "Malloc disk", 00:19:30.703 "block_size": 4096, 00:19:30.703 "num_blocks": 8192, 00:19:30.703 "uuid": "5b54da50-48bd-11ef-a06c-59ddad71024c", 00:19:30.703 "assigned_rate_limits": { 00:19:30.703 "rw_ios_per_sec": 0, 00:19:30.703 "rw_mbytes_per_sec": 0, 00:19:30.703 "r_mbytes_per_sec": 0, 00:19:30.703 "w_mbytes_per_sec": 0 00:19:30.703 }, 00:19:30.703 "claimed": true, 00:19:30.703 "claim_type": "exclusive_write", 00:19:30.703 "zoned": false, 00:19:30.703 "supported_io_types": { 00:19:30.703 "read": true, 00:19:30.703 "write": true, 00:19:30.703 "unmap": true, 00:19:30.703 "flush": true, 00:19:30.703 "reset": true, 00:19:30.703 "nvme_admin": false, 00:19:30.703 "nvme_io": false, 00:19:30.703 "nvme_io_md": false, 00:19:30.703 "write_zeroes": true, 00:19:30.703 "zcopy": true, 00:19:30.703 "get_zone_info": false, 00:19:30.703 "zone_management": false, 00:19:30.703 "zone_append": false, 00:19:30.703 "compare": false, 00:19:30.703 "compare_and_write": false, 00:19:30.703 "abort": true, 00:19:30.703 "seek_hole": false, 00:19:30.703 "seek_data": false, 00:19:30.703 "copy": true, 00:19:30.703 "nvme_iov_md": false 00:19:30.703 }, 00:19:30.703 "memory_domains": [ 00:19:30.703 { 00:19:30.703 "dma_device_id": "system", 00:19:30.703 "dma_device_type": 1 00:19:30.703 }, 00:19:30.703 { 00:19:30.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.703 "dma_device_type": 2 00:19:30.703 } 00:19:30.703 ], 00:19:30.703 "driver_specific": {} 00:19:30.703 }' 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:30.703 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:30.967 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:30.967 "name": "BaseBdev2", 00:19:30.967 "aliases": [ 00:19:30.967 "5cb0c850-48bd-11ef-a06c-59ddad71024c" 00:19:30.967 ], 00:19:30.967 "product_name": "Malloc disk", 00:19:30.967 "block_size": 4096, 00:19:30.967 "num_blocks": 8192, 00:19:30.967 "uuid": "5cb0c850-48bd-11ef-a06c-59ddad71024c", 00:19:30.967 "assigned_rate_limits": { 00:19:30.967 "rw_ios_per_sec": 0, 00:19:30.967 "rw_mbytes_per_sec": 0, 00:19:30.967 "r_mbytes_per_sec": 0, 00:19:30.967 "w_mbytes_per_sec": 0 00:19:30.967 }, 00:19:30.967 "claimed": true, 00:19:30.967 "claim_type": "exclusive_write", 00:19:30.967 "zoned": false, 00:19:30.967 "supported_io_types": { 00:19:30.967 "read": true, 00:19:30.967 "write": true, 00:19:30.967 "unmap": true, 00:19:30.967 "flush": true, 00:19:30.967 "reset": true, 00:19:30.967 "nvme_admin": false, 00:19:30.967 "nvme_io": false, 00:19:30.967 "nvme_io_md": false, 00:19:30.967 "write_zeroes": true, 00:19:30.967 "zcopy": true, 00:19:30.967 "get_zone_info": false, 00:19:30.967 "zone_management": false, 00:19:30.967 "zone_append": false, 00:19:30.967 "compare": false, 00:19:30.967 "compare_and_write": false, 00:19:30.967 "abort": true, 00:19:30.967 "seek_hole": false, 00:19:30.967 "seek_data": false, 00:19:30.967 "copy": true, 00:19:30.967 "nvme_iov_md": false 00:19:30.967 }, 00:19:30.967 "memory_domains": [ 00:19:30.967 { 00:19:30.967 "dma_device_id": "system", 00:19:30.967 "dma_device_type": 1 00:19:30.967 }, 00:19:30.967 { 00:19:30.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.967 "dma_device_type": 2 00:19:30.967 } 00:19:30.967 ], 00:19:30.967 "driver_specific": {} 00:19:30.967 }' 00:19:30.967 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.967 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.967 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:19:30.967 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:30.967 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:30.967 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:30.967 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.967 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.967 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:30.967 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.967 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.967 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:30.967 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:31.225 [2024-07-23 06:32:43.616550] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:31.225 
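[Editorial note, not part of the captured output] Here the test removes BaseBdev1 out from under the online array. Because raid1 is redundant (has_redundancy returns 0 for raid1 in this harness), Existed_Raid is expected to remain "online" with num_base_bdevs_discovered and num_base_bdevs_operational dropping to 1, which the next JSON dump shows. Sketch, same $RPC shorthand:

    # Remove one leg of the raid1 mirror and verify the array stays online.
    $RPC bdev_malloc_delete BaseBdev1
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'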
06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.225 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.483 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:31.483 "name": "Existed_Raid", 00:19:31.483 "uuid": "5c3af923-48bd-11ef-a06c-59ddad71024c", 00:19:31.483 "strip_size_kb": 0, 00:19:31.483 "state": "online", 00:19:31.483 "raid_level": "raid1", 00:19:31.483 "superblock": true, 00:19:31.483 "num_base_bdevs": 2, 00:19:31.483 "num_base_bdevs_discovered": 1, 00:19:31.483 "num_base_bdevs_operational": 1, 00:19:31.483 "base_bdevs_list": [ 00:19:31.483 { 00:19:31.483 "name": null, 00:19:31.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.483 "is_configured": false, 00:19:31.483 "data_offset": 256, 00:19:31.483 "data_size": 7936 00:19:31.483 }, 00:19:31.483 { 00:19:31.483 "name": "BaseBdev2", 00:19:31.483 "uuid": "5cb0c850-48bd-11ef-a06c-59ddad71024c", 00:19:31.483 "is_configured": true, 00:19:31.483 "data_offset": 256, 00:19:31.483 "data_size": 7936 00:19:31.483 } 00:19:31.483 ] 00:19:31.483 }' 00:19:31.483 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:31.483 06:32:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.741 06:32:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:31.741 06:32:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:31.741 06:32:44 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.741 06:32:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:31.998 06:32:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:31.998 06:32:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:31.998 06:32:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:32.257 [2024-07-23 06:32:44.743191] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:32.257 [2024-07-23 06:32:44.743234] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:32.257 [2024-07-23 06:32:44.749473] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.257 [2024-07-23 06:32:44.749492] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:32.257 [2024-07-23 06:32:44.749497] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a7e24834a00 name Existed_Raid, state offline 00:19:32.257 06:32:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:32.257 06:32:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:32.257 06:32:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.257 06:32:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 65626 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 65626 ']' 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 65626 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps -c -o command 65626 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # tail -1 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:19:32.515 killing process with pid 65626 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65626' 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 65626 
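[Editorial note, not part of the captured output] Teardown of the state-function test: deleting the remaining base bdev drives the raid from online to offline (the deconfigure/destruct DEBUG lines above), after which bdev_raid_get_bdevs should report no raid bdev, and the bdev_svc process (pid 65626 in this particular run) is killed. Sketch, same $RPC shorthand:

    # Delete the last base bdev; with nothing left, the raid goes offline.
    $RPC bdev_malloc_delete BaseBdev2
    # Expect empty output: no raid bdev should remain.
    $RPC bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'
    # Stop the bdev_svc app serving /var/tmp/spdk-raid.sock (pid from this run).
    kill 65626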
00:19:32.515 [2024-07-23 06:32:45.023116] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:32.515 [2024-07-23 06:32:45.023150] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:32.515 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 65626 00:19:32.774 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:19:32.774 00:19:32.774 real 0m8.969s 00:19:32.774 user 0m15.534s 00:19:32.774 sys 0m1.636s 00:19:32.774 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:32.774 06:32:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:32.774 ************************************ 00:19:32.774 END TEST raid_state_function_test_sb_4k 00:19:32.774 ************************************ 00:19:32.774 06:32:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:32.774 06:32:45 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:19:32.774 06:32:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:32.774 06:32:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:32.774 06:32:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.774 ************************************ 00:19:32.774 START TEST raid_superblock_test_4k 00:19:32.774 ************************************ 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=65896 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:32.774 06:32:45 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 65896 /var/tmp/spdk-raid.sock 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 65896 ']' 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.774 06:32:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:32.774 [2024-07-23 06:32:45.260060] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:32.774 [2024-07-23 06:32:45.260273] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:33.340 EAL: TSC is not safe to use in SMP mode 00:19:33.340 EAL: TSC is not invariant 00:19:33.340 [2024-07-23 06:32:45.820411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.597 [2024-07-23 06:32:45.908917] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:33.597 [2024-07-23 06:32:45.911220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.597 [2024-07-23 06:32:45.912060] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:33.597 [2024-07-23 06:32:45.912073] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:33.854 06:32:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.854 06:32:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:19:33.854 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:33.854 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:33.854 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:33.854 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:33.854 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:33.854 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:33.854 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:33.854 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:33.854 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:19:34.112 malloc1 00:19:34.112 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
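[Editorial note, not part of the captured output] Base-bdev preparation for raid_superblock_test_4k, as traced here and in the following chunk: each 32 MB / 4096-byte-block malloc bdev is wrapped in a passthru bdev with a fixed UUID, and the two passthru bdevs are then assembled into a raid1 volume with an on-disk superblock. Sketch using only the RPCs visible in the trace, with the same editorial $RPC shorthand:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_create 32 4096 -b malloc1
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_malloc_create 32 4096 -b malloc2
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # -s writes a superblock so the array can later be re-assembled from disk.
    $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s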
00:19:34.371 [2024-07-23 06:32:46.767554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:34.371 [2024-07-23 06:32:46.767621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.371 [2024-07-23 06:32:46.767660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1804f8634780 00:19:34.371 [2024-07-23 06:32:46.767668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.371 [2024-07-23 06:32:46.768616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.371 [2024-07-23 06:32:46.768641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:34.371 pt1 00:19:34.371 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:34.371 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:34.371 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:34.371 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:34.371 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:34.371 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:34.371 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:34.371 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:34.371 06:32:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:19:34.629 malloc2 00:19:34.629 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:34.887 [2024-07-23 06:32:47.275616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:34.887 [2024-07-23 06:32:47.275680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.887 [2024-07-23 06:32:47.275706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1804f8634c80 00:19:34.887 [2024-07-23 06:32:47.275714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.887 [2024-07-23 06:32:47.276435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.887 [2024-07-23 06:32:47.276458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:34.887 pt2 00:19:34.887 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:34.887 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:34.887 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:19:35.145 [2024-07-23 06:32:47.551637] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:35.145 [2024-07-23 06:32:47.552302] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:35.145 [2024-07-23 06:32:47.552374] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1804f8634f00 00:19:35.145 [2024-07-23 06:32:47.552395] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:35.145 [2024-07-23 06:32:47.552445] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1804f8697e20 00:19:35.145 [2024-07-23 06:32:47.552533] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1804f8634f00 00:19:35.145 [2024-07-23 06:32:47.552537] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1804f8634f00 00:19:35.145 [2024-07-23 06:32:47.552564] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.145 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:35.145 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:35.145 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:35.145 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:35.145 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:35.145 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:35.145 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:35.145 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:35.145 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:35.145 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:35.145 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.145 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.403 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:35.403 "name": "raid_bdev1", 00:19:35.403 "uuid": "606c7d7e-48bd-11ef-a06c-59ddad71024c", 00:19:35.403 "strip_size_kb": 0, 00:19:35.403 "state": "online", 00:19:35.403 "raid_level": "raid1", 00:19:35.403 "superblock": true, 00:19:35.403 "num_base_bdevs": 2, 00:19:35.403 "num_base_bdevs_discovered": 2, 00:19:35.403 "num_base_bdevs_operational": 2, 00:19:35.403 "base_bdevs_list": [ 00:19:35.403 { 00:19:35.403 "name": "pt1", 00:19:35.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.403 "is_configured": true, 00:19:35.403 "data_offset": 256, 00:19:35.403 "data_size": 7936 00:19:35.403 }, 00:19:35.403 { 00:19:35.403 "name": "pt2", 00:19:35.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.403 "is_configured": true, 00:19:35.403 "data_offset": 256, 00:19:35.403 "data_size": 7936 00:19:35.403 } 00:19:35.403 ] 00:19:35.403 }' 00:19:35.403 06:32:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:35.403 06:32:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.661 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:19:35.661 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # 
local raid_bdev_name=raid_bdev1 00:19:35.661 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:35.661 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:35.661 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:35.661 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:19:35.661 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:35.661 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:35.919 [2024-07-23 06:32:48.351739] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.919 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:35.919 "name": "raid_bdev1", 00:19:35.919 "aliases": [ 00:19:35.919 "606c7d7e-48bd-11ef-a06c-59ddad71024c" 00:19:35.919 ], 00:19:35.919 "product_name": "Raid Volume", 00:19:35.919 "block_size": 4096, 00:19:35.919 "num_blocks": 7936, 00:19:35.919 "uuid": "606c7d7e-48bd-11ef-a06c-59ddad71024c", 00:19:35.919 "assigned_rate_limits": { 00:19:35.919 "rw_ios_per_sec": 0, 00:19:35.919 "rw_mbytes_per_sec": 0, 00:19:35.919 "r_mbytes_per_sec": 0, 00:19:35.919 "w_mbytes_per_sec": 0 00:19:35.919 }, 00:19:35.919 "claimed": false, 00:19:35.919 "zoned": false, 00:19:35.919 "supported_io_types": { 00:19:35.919 "read": true, 00:19:35.919 "write": true, 00:19:35.919 "unmap": false, 00:19:35.919 "flush": false, 00:19:35.919 "reset": true, 00:19:35.919 "nvme_admin": false, 00:19:35.919 "nvme_io": false, 00:19:35.919 "nvme_io_md": false, 00:19:35.919 "write_zeroes": true, 00:19:35.919 "zcopy": false, 00:19:35.919 "get_zone_info": false, 00:19:35.919 "zone_management": false, 00:19:35.919 "zone_append": false, 00:19:35.919 "compare": false, 00:19:35.919 "compare_and_write": false, 00:19:35.919 "abort": false, 00:19:35.919 "seek_hole": false, 00:19:35.919 "seek_data": false, 00:19:35.919 "copy": false, 00:19:35.919 "nvme_iov_md": false 00:19:35.919 }, 00:19:35.919 "memory_domains": [ 00:19:35.919 { 00:19:35.919 "dma_device_id": "system", 00:19:35.919 "dma_device_type": 1 00:19:35.919 }, 00:19:35.919 { 00:19:35.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.919 "dma_device_type": 2 00:19:35.919 }, 00:19:35.919 { 00:19:35.919 "dma_device_id": "system", 00:19:35.919 "dma_device_type": 1 00:19:35.919 }, 00:19:35.919 { 00:19:35.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.919 "dma_device_type": 2 00:19:35.919 } 00:19:35.919 ], 00:19:35.919 "driver_specific": { 00:19:35.919 "raid": { 00:19:35.919 "uuid": "606c7d7e-48bd-11ef-a06c-59ddad71024c", 00:19:35.919 "strip_size_kb": 0, 00:19:35.919 "state": "online", 00:19:35.919 "raid_level": "raid1", 00:19:35.919 "superblock": true, 00:19:35.919 "num_base_bdevs": 2, 00:19:35.919 "num_base_bdevs_discovered": 2, 00:19:35.919 "num_base_bdevs_operational": 2, 00:19:35.919 "base_bdevs_list": [ 00:19:35.919 { 00:19:35.919 "name": "pt1", 00:19:35.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.919 "is_configured": true, 00:19:35.920 "data_offset": 256, 00:19:35.920 "data_size": 7936 00:19:35.920 }, 00:19:35.920 { 00:19:35.920 "name": "pt2", 00:19:35.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.920 "is_configured": true, 00:19:35.920 "data_offset": 256, 00:19:35.920 "data_size": 7936 
00:19:35.920 } 00:19:35.920 ] 00:19:35.920 } 00:19:35.920 } 00:19:35.920 }' 00:19:35.920 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:35.920 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:35.920 pt2' 00:19:35.920 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:35.920 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:35.920 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:36.178 "name": "pt1", 00:19:36.178 "aliases": [ 00:19:36.178 "00000000-0000-0000-0000-000000000001" 00:19:36.178 ], 00:19:36.178 "product_name": "passthru", 00:19:36.178 "block_size": 4096, 00:19:36.178 "num_blocks": 8192, 00:19:36.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.178 "assigned_rate_limits": { 00:19:36.178 "rw_ios_per_sec": 0, 00:19:36.178 "rw_mbytes_per_sec": 0, 00:19:36.178 "r_mbytes_per_sec": 0, 00:19:36.178 "w_mbytes_per_sec": 0 00:19:36.178 }, 00:19:36.178 "claimed": true, 00:19:36.178 "claim_type": "exclusive_write", 00:19:36.178 "zoned": false, 00:19:36.178 "supported_io_types": { 00:19:36.178 "read": true, 00:19:36.178 "write": true, 00:19:36.178 "unmap": true, 00:19:36.178 "flush": true, 00:19:36.178 "reset": true, 00:19:36.178 "nvme_admin": false, 00:19:36.178 "nvme_io": false, 00:19:36.178 "nvme_io_md": false, 00:19:36.178 "write_zeroes": true, 00:19:36.178 "zcopy": true, 00:19:36.178 "get_zone_info": false, 00:19:36.178 "zone_management": false, 00:19:36.178 "zone_append": false, 00:19:36.178 "compare": false, 00:19:36.178 "compare_and_write": false, 00:19:36.178 "abort": true, 00:19:36.178 "seek_hole": false, 00:19:36.178 "seek_data": false, 00:19:36.178 "copy": true, 00:19:36.178 "nvme_iov_md": false 00:19:36.178 }, 00:19:36.178 "memory_domains": [ 00:19:36.178 { 00:19:36.178 "dma_device_id": "system", 00:19:36.178 "dma_device_type": 1 00:19:36.178 }, 00:19:36.178 { 00:19:36.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.178 "dma_device_type": 2 00:19:36.178 } 00:19:36.178 ], 00:19:36.178 "driver_specific": { 00:19:36.178 "passthru": { 00:19:36.178 "name": "pt1", 00:19:36.178 "base_bdev_name": "malloc1" 00:19:36.178 } 00:19:36.178 } 00:19:36.178 }' 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:36.178 06:32:48 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:36.178 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:36.436 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:36.436 "name": "pt2", 00:19:36.436 "aliases": [ 00:19:36.436 "00000000-0000-0000-0000-000000000002" 00:19:36.436 ], 00:19:36.436 "product_name": "passthru", 00:19:36.436 "block_size": 4096, 00:19:36.436 "num_blocks": 8192, 00:19:36.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.436 "assigned_rate_limits": { 00:19:36.436 "rw_ios_per_sec": 0, 00:19:36.436 "rw_mbytes_per_sec": 0, 00:19:36.436 "r_mbytes_per_sec": 0, 00:19:36.436 "w_mbytes_per_sec": 0 00:19:36.436 }, 00:19:36.436 "claimed": true, 00:19:36.436 "claim_type": "exclusive_write", 00:19:36.436 "zoned": false, 00:19:36.436 "supported_io_types": { 00:19:36.436 "read": true, 00:19:36.436 "write": true, 00:19:36.436 "unmap": true, 00:19:36.436 "flush": true, 00:19:36.436 "reset": true, 00:19:36.436 "nvme_admin": false, 00:19:36.436 "nvme_io": false, 00:19:36.436 "nvme_io_md": false, 00:19:36.436 "write_zeroes": true, 00:19:36.436 "zcopy": true, 00:19:36.436 "get_zone_info": false, 00:19:36.436 "zone_management": false, 00:19:36.436 "zone_append": false, 00:19:36.436 "compare": false, 00:19:36.436 "compare_and_write": false, 00:19:36.436 "abort": true, 00:19:36.436 "seek_hole": false, 00:19:36.436 "seek_data": false, 00:19:36.436 "copy": true, 00:19:36.436 "nvme_iov_md": false 00:19:36.436 }, 00:19:36.436 "memory_domains": [ 00:19:36.436 { 00:19:36.436 "dma_device_id": "system", 00:19:36.436 "dma_device_type": 1 00:19:36.436 }, 00:19:36.436 { 00:19:36.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.436 "dma_device_type": 2 00:19:36.436 } 00:19:36.436 ], 00:19:36.436 "driver_specific": { 00:19:36.436 "passthru": { 00:19:36.436 "name": "pt2", 00:19:36.436 "base_bdev_name": "malloc2" 00:19:36.436 } 00:19:36.436 } 00:19:36.436 }' 00:19:36.436 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.436 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.436 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:19:36.436 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.436 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.436 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:36.436 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.436 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.436 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:36.436 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.436 06:32:48 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.436 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:36.436 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:36.436 06:32:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:19:36.693 [2024-07-23 06:32:49.163801] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.693 06:32:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=606c7d7e-48bd-11ef-a06c-59ddad71024c 00:19:36.693 06:32:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 606c7d7e-48bd-11ef-a06c-59ddad71024c ']' 00:19:36.693 06:32:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:36.951 [2024-07-23 06:32:49.395765] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.951 [2024-07-23 06:32:49.395786] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.951 [2024-07-23 06:32:49.395841] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.951 [2024-07-23 06:32:49.395856] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.951 [2024-07-23 06:32:49.395861] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1804f8634f00 name raid_bdev1, state offline 00:19:36.951 06:32:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.951 06:32:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:37.208 06:32:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:37.208 06:32:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:37.208 06:32:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:37.208 06:32:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:37.466 06:32:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:37.466 06:32:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:37.724 06:32:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:37.724 06:32:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:37.982 06:32:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:19:37.982 06:32:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:19:37.982 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 
00:19:37.982 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:19:37.982 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:37.982 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.982 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:37.982 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.982 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:37.982 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.982 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:37.982 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:37.982 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:19:38.240 [2024-07-23 06:32:50.647884] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:38.240 [2024-07-23 06:32:50.648455] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:38.240 [2024-07-23 06:32:50.648481] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:38.240 [2024-07-23 06:32:50.648516] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:38.240 [2024-07-23 06:32:50.648527] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.240 [2024-07-23 06:32:50.648531] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1804f8634c80 name raid_bdev1, state configuring 00:19:38.240 request: 00:19:38.240 { 00:19:38.240 "name": "raid_bdev1", 00:19:38.240 "raid_level": "raid1", 00:19:38.240 "base_bdevs": [ 00:19:38.240 "malloc1", 00:19:38.240 "malloc2" 00:19:38.240 ], 00:19:38.240 "superblock": false, 00:19:38.240 "method": "bdev_raid_create", 00:19:38.240 "req_id": 1 00:19:38.240 } 00:19:38.240 Got JSON-RPC error response 00:19:38.240 response: 00:19:38.240 { 00:19:38.240 "code": -17, 00:19:38.240 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:38.240 } 00:19:38.240 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:19:38.240 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:38.240 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:38.240 06:32:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:38.240 06:32:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.240 
06:32:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:19:38.498 06:32:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:19:38.498 06:32:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:19:38.498 06:32:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:38.756 [2024-07-23 06:32:51.171940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:38.756 [2024-07-23 06:32:51.172004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.756 [2024-07-23 06:32:51.172031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1804f8634780 00:19:38.756 [2024-07-23 06:32:51.172043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.756 [2024-07-23 06:32:51.172733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.756 [2024-07-23 06:32:51.172757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:38.756 [2024-07-23 06:32:51.172782] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:38.756 [2024-07-23 06:32:51.172793] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:38.756 pt1 00:19:38.756 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:38.756 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:38.756 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:38.756 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:38.756 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:38.756 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:38.756 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:38.756 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:38.756 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:38.756 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:38.756 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.756 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.014 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.014 "name": "raid_bdev1", 00:19:39.014 "uuid": "606c7d7e-48bd-11ef-a06c-59ddad71024c", 00:19:39.014 "strip_size_kb": 0, 00:19:39.014 "state": "configuring", 00:19:39.014 "raid_level": "raid1", 00:19:39.014 "superblock": true, 00:19:39.014 "num_base_bdevs": 2, 00:19:39.014 "num_base_bdevs_discovered": 1, 00:19:39.014 "num_base_bdevs_operational": 2, 00:19:39.014 "base_bdevs_list": [ 00:19:39.014 { 00:19:39.014 "name": "pt1", 00:19:39.014 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:39.014 "is_configured": true, 00:19:39.014 "data_offset": 256, 00:19:39.014 "data_size": 7936 00:19:39.014 }, 00:19:39.014 { 00:19:39.014 "name": null, 00:19:39.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.014 "is_configured": false, 00:19:39.014 "data_offset": 256, 00:19:39.014 "data_size": 7936 00:19:39.014 } 00:19:39.014 ] 00:19:39.014 }' 00:19:39.014 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.014 06:32:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:39.273 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:19:39.273 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:19:39.273 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:39.273 06:32:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:39.543 [2024-07-23 06:32:52.007995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:39.543 [2024-07-23 06:32:52.008063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.543 [2024-07-23 06:32:52.008091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1804f8634f00 00:19:39.543 [2024-07-23 06:32:52.008099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.543 [2024-07-23 06:32:52.008211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.543 [2024-07-23 06:32:52.008222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:39.543 [2024-07-23 06:32:52.008260] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:39.543 [2024-07-23 06:32:52.008268] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:39.543 [2024-07-23 06:32:52.008295] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1804f8635180 00:19:39.543 [2024-07-23 06:32:52.008299] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:39.543 [2024-07-23 06:32:52.008318] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1804f8697e20 00:19:39.543 [2024-07-23 06:32:52.008388] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1804f8635180 00:19:39.543 [2024-07-23 06:32:52.008392] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1804f8635180 00:19:39.543 [2024-07-23 06:32:52.008430] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.543 pt2 00:19:39.543 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:39.543 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:39.543 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:39.543 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:39.543 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:39.543 06:32:52 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:39.543 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:39.543 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:39.543 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:39.543 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.543 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.543 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:39.543 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.543 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.822 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.822 "name": "raid_bdev1", 00:19:39.822 "uuid": "606c7d7e-48bd-11ef-a06c-59ddad71024c", 00:19:39.822 "strip_size_kb": 0, 00:19:39.822 "state": "online", 00:19:39.822 "raid_level": "raid1", 00:19:39.822 "superblock": true, 00:19:39.822 "num_base_bdevs": 2, 00:19:39.822 "num_base_bdevs_discovered": 2, 00:19:39.822 "num_base_bdevs_operational": 2, 00:19:39.822 "base_bdevs_list": [ 00:19:39.822 { 00:19:39.822 "name": "pt1", 00:19:39.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:39.822 "is_configured": true, 00:19:39.822 "data_offset": 256, 00:19:39.822 "data_size": 7936 00:19:39.822 }, 00:19:39.822 { 00:19:39.822 "name": "pt2", 00:19:39.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.822 "is_configured": true, 00:19:39.822 "data_offset": 256, 00:19:39.822 "data_size": 7936 00:19:39.822 } 00:19:39.822 ] 00:19:39.822 }' 00:19:39.822 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.822 06:32:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.388 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:19:40.388 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:40.388 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:40.388 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:40.388 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:40.388 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:19:40.388 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:40.388 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:40.388 [2024-07-23 06:32:52.820085] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.388 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:40.388 "name": "raid_bdev1", 00:19:40.388 "aliases": [ 00:19:40.388 "606c7d7e-48bd-11ef-a06c-59ddad71024c" 00:19:40.388 ], 00:19:40.388 "product_name": "Raid Volume", 00:19:40.388 "block_size": 4096, 
00:19:40.388 "num_blocks": 7936, 00:19:40.388 "uuid": "606c7d7e-48bd-11ef-a06c-59ddad71024c", 00:19:40.388 "assigned_rate_limits": { 00:19:40.388 "rw_ios_per_sec": 0, 00:19:40.388 "rw_mbytes_per_sec": 0, 00:19:40.388 "r_mbytes_per_sec": 0, 00:19:40.388 "w_mbytes_per_sec": 0 00:19:40.388 }, 00:19:40.388 "claimed": false, 00:19:40.388 "zoned": false, 00:19:40.388 "supported_io_types": { 00:19:40.388 "read": true, 00:19:40.388 "write": true, 00:19:40.388 "unmap": false, 00:19:40.388 "flush": false, 00:19:40.388 "reset": true, 00:19:40.388 "nvme_admin": false, 00:19:40.388 "nvme_io": false, 00:19:40.388 "nvme_io_md": false, 00:19:40.388 "write_zeroes": true, 00:19:40.388 "zcopy": false, 00:19:40.388 "get_zone_info": false, 00:19:40.388 "zone_management": false, 00:19:40.388 "zone_append": false, 00:19:40.388 "compare": false, 00:19:40.388 "compare_and_write": false, 00:19:40.388 "abort": false, 00:19:40.388 "seek_hole": false, 00:19:40.388 "seek_data": false, 00:19:40.388 "copy": false, 00:19:40.388 "nvme_iov_md": false 00:19:40.388 }, 00:19:40.388 "memory_domains": [ 00:19:40.388 { 00:19:40.388 "dma_device_id": "system", 00:19:40.388 "dma_device_type": 1 00:19:40.388 }, 00:19:40.388 { 00:19:40.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.388 "dma_device_type": 2 00:19:40.388 }, 00:19:40.388 { 00:19:40.388 "dma_device_id": "system", 00:19:40.388 "dma_device_type": 1 00:19:40.388 }, 00:19:40.388 { 00:19:40.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.388 "dma_device_type": 2 00:19:40.388 } 00:19:40.388 ], 00:19:40.388 "driver_specific": { 00:19:40.388 "raid": { 00:19:40.388 "uuid": "606c7d7e-48bd-11ef-a06c-59ddad71024c", 00:19:40.388 "strip_size_kb": 0, 00:19:40.388 "state": "online", 00:19:40.388 "raid_level": "raid1", 00:19:40.388 "superblock": true, 00:19:40.388 "num_base_bdevs": 2, 00:19:40.388 "num_base_bdevs_discovered": 2, 00:19:40.388 "num_base_bdevs_operational": 2, 00:19:40.388 "base_bdevs_list": [ 00:19:40.388 { 00:19:40.388 "name": "pt1", 00:19:40.388 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:40.388 "is_configured": true, 00:19:40.388 "data_offset": 256, 00:19:40.388 "data_size": 7936 00:19:40.388 }, 00:19:40.388 { 00:19:40.388 "name": "pt2", 00:19:40.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.388 "is_configured": true, 00:19:40.388 "data_offset": 256, 00:19:40.388 "data_size": 7936 00:19:40.388 } 00:19:40.388 ] 00:19:40.388 } 00:19:40.388 } 00:19:40.388 }' 00:19:40.388 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:40.388 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:40.388 pt2' 00:19:40.388 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:40.388 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:40.388 06:32:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:40.971 "name": "pt1", 00:19:40.971 "aliases": [ 00:19:40.971 "00000000-0000-0000-0000-000000000001" 00:19:40.971 ], 00:19:40.971 "product_name": "passthru", 00:19:40.971 "block_size": 4096, 00:19:40.971 "num_blocks": 8192, 00:19:40.971 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:40.971 
"assigned_rate_limits": { 00:19:40.971 "rw_ios_per_sec": 0, 00:19:40.971 "rw_mbytes_per_sec": 0, 00:19:40.971 "r_mbytes_per_sec": 0, 00:19:40.971 "w_mbytes_per_sec": 0 00:19:40.971 }, 00:19:40.971 "claimed": true, 00:19:40.971 "claim_type": "exclusive_write", 00:19:40.971 "zoned": false, 00:19:40.971 "supported_io_types": { 00:19:40.971 "read": true, 00:19:40.971 "write": true, 00:19:40.971 "unmap": true, 00:19:40.971 "flush": true, 00:19:40.971 "reset": true, 00:19:40.971 "nvme_admin": false, 00:19:40.971 "nvme_io": false, 00:19:40.971 "nvme_io_md": false, 00:19:40.971 "write_zeroes": true, 00:19:40.971 "zcopy": true, 00:19:40.971 "get_zone_info": false, 00:19:40.971 "zone_management": false, 00:19:40.971 "zone_append": false, 00:19:40.971 "compare": false, 00:19:40.971 "compare_and_write": false, 00:19:40.971 "abort": true, 00:19:40.971 "seek_hole": false, 00:19:40.971 "seek_data": false, 00:19:40.971 "copy": true, 00:19:40.971 "nvme_iov_md": false 00:19:40.971 }, 00:19:40.971 "memory_domains": [ 00:19:40.971 { 00:19:40.971 "dma_device_id": "system", 00:19:40.971 "dma_device_type": 1 00:19:40.971 }, 00:19:40.971 { 00:19:40.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.971 "dma_device_type": 2 00:19:40.971 } 00:19:40.971 ], 00:19:40.971 "driver_specific": { 00:19:40.971 "passthru": { 00:19:40.971 "name": "pt1", 00:19:40.971 "base_bdev_name": "malloc1" 00:19:40.971 } 00:19:40.971 } 00:19:40.971 }' 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:40.971 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:41.231 "name": "pt2", 00:19:41.231 "aliases": [ 00:19:41.231 "00000000-0000-0000-0000-000000000002" 00:19:41.231 ], 00:19:41.231 "product_name": "passthru", 00:19:41.231 "block_size": 4096, 00:19:41.231 "num_blocks": 8192, 00:19:41.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.231 "assigned_rate_limits": { 00:19:41.231 "rw_ios_per_sec": 0, 00:19:41.231 "rw_mbytes_per_sec": 0, 
00:19:41.231 "r_mbytes_per_sec": 0, 00:19:41.231 "w_mbytes_per_sec": 0 00:19:41.231 }, 00:19:41.231 "claimed": true, 00:19:41.231 "claim_type": "exclusive_write", 00:19:41.231 "zoned": false, 00:19:41.231 "supported_io_types": { 00:19:41.231 "read": true, 00:19:41.231 "write": true, 00:19:41.231 "unmap": true, 00:19:41.231 "flush": true, 00:19:41.231 "reset": true, 00:19:41.231 "nvme_admin": false, 00:19:41.231 "nvme_io": false, 00:19:41.231 "nvme_io_md": false, 00:19:41.231 "write_zeroes": true, 00:19:41.231 "zcopy": true, 00:19:41.231 "get_zone_info": false, 00:19:41.231 "zone_management": false, 00:19:41.231 "zone_append": false, 00:19:41.231 "compare": false, 00:19:41.231 "compare_and_write": false, 00:19:41.231 "abort": true, 00:19:41.231 "seek_hole": false, 00:19:41.231 "seek_data": false, 00:19:41.231 "copy": true, 00:19:41.231 "nvme_iov_md": false 00:19:41.231 }, 00:19:41.231 "memory_domains": [ 00:19:41.231 { 00:19:41.231 "dma_device_id": "system", 00:19:41.231 "dma_device_type": 1 00:19:41.231 }, 00:19:41.231 { 00:19:41.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.231 "dma_device_type": 2 00:19:41.231 } 00:19:41.231 ], 00:19:41.231 "driver_specific": { 00:19:41.231 "passthru": { 00:19:41.231 "name": "pt2", 00:19:41.231 "base_bdev_name": "malloc2" 00:19:41.231 } 00:19:41.231 } 00:19:41.231 }' 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:41.231 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:19:41.490 [2024-07-23 06:32:53.908102] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.490 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 606c7d7e-48bd-11ef-a06c-59ddad71024c '!=' 606c7d7e-48bd-11ef-a06c-59ddad71024c ']' 00:19:41.490 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:19:41.490 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:41.490 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:19:41.490 06:32:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:41.748 [2024-07-23 06:32:54.152086] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:41.748 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:41.748 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:41.748 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:41.748 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:41.748 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:41.748 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:41.748 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:41.748 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:41.748 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:41.748 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:41.748 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.748 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.006 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:42.006 "name": "raid_bdev1", 00:19:42.006 "uuid": "606c7d7e-48bd-11ef-a06c-59ddad71024c", 00:19:42.006 "strip_size_kb": 0, 00:19:42.006 "state": "online", 00:19:42.006 "raid_level": "raid1", 00:19:42.006 "superblock": true, 00:19:42.006 "num_base_bdevs": 2, 00:19:42.006 "num_base_bdevs_discovered": 1, 00:19:42.006 "num_base_bdevs_operational": 1, 00:19:42.006 "base_bdevs_list": [ 00:19:42.006 { 00:19:42.006 "name": null, 00:19:42.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.006 "is_configured": false, 00:19:42.006 "data_offset": 256, 00:19:42.006 "data_size": 7936 00:19:42.006 }, 00:19:42.006 { 00:19:42.006 "name": "pt2", 00:19:42.006 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:42.006 "is_configured": true, 00:19:42.006 "data_offset": 256, 00:19:42.006 "data_size": 7936 00:19:42.006 } 00:19:42.006 ] 00:19:42.006 }' 00:19:42.006 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:42.006 06:32:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:42.264 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:42.522 [2024-07-23 06:32:54.956151] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:42.522 [2024-07-23 06:32:54.956176] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:42.522 [2024-07-23 06:32:54.956254] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:42.522 [2024-07-23 06:32:54.956267] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:42.522 [2024-07-23 06:32:54.956272] 
bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1804f8635180 name raid_bdev1, state offline 00:19:42.522 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.522 06:32:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:19:42.780 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:19:42.780 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:19:42.780 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:19:42.780 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:19:42.780 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:43.038 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:19:43.038 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:19:43.038 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:19:43.038 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:19:43.038 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:19:43.038 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:43.319 [2024-07-23 06:32:55.752176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:43.319 [2024-07-23 06:32:55.752236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.319 [2024-07-23 06:32:55.752263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1804f8634f00 00:19:43.319 [2024-07-23 06:32:55.752271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.319 [2024-07-23 06:32:55.752972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.319 [2024-07-23 06:32:55.752996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:43.319 [2024-07-23 06:32:55.753021] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:43.319 [2024-07-23 06:32:55.753033] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:43.319 [2024-07-23 06:32:55.753058] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1804f8635180 00:19:43.319 [2024-07-23 06:32:55.753062] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:43.319 [2024-07-23 06:32:55.753083] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1804f8697e20 00:19:43.319 [2024-07-23 06:32:55.753131] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1804f8635180 00:19:43.319 [2024-07-23 06:32:55.753135] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1804f8635180 00:19:43.319 [2024-07-23 06:32:55.753157] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.319 pt2 00:19:43.319 06:32:55 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:43.319 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:43.319 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:43.319 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:43.319 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:43.319 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:43.319 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.319 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.319 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.319 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.319 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.319 06:32:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.591 06:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:43.591 "name": "raid_bdev1", 00:19:43.591 "uuid": "606c7d7e-48bd-11ef-a06c-59ddad71024c", 00:19:43.591 "strip_size_kb": 0, 00:19:43.591 "state": "online", 00:19:43.591 "raid_level": "raid1", 00:19:43.591 "superblock": true, 00:19:43.591 "num_base_bdevs": 2, 00:19:43.591 "num_base_bdevs_discovered": 1, 00:19:43.591 "num_base_bdevs_operational": 1, 00:19:43.591 "base_bdevs_list": [ 00:19:43.591 { 00:19:43.591 "name": null, 00:19:43.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.591 "is_configured": false, 00:19:43.591 "data_offset": 256, 00:19:43.591 "data_size": 7936 00:19:43.591 }, 00:19:43.591 { 00:19:43.591 "name": "pt2", 00:19:43.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:43.591 "is_configured": true, 00:19:43.591 "data_offset": 256, 00:19:43.591 "data_size": 7936 00:19:43.591 } 00:19:43.591 ] 00:19:43.591 }' 00:19:43.591 06:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:43.591 06:32:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.848 06:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:44.107 [2024-07-23 06:32:56.560304] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.107 [2024-07-23 06:32:56.560328] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:44.107 [2024-07-23 06:32:56.560368] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.107 [2024-07-23 06:32:56.560396] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.107 [2024-07-23 06:32:56.560401] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1804f8635180 name raid_bdev1, state offline 00:19:44.107 06:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.107 06:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:19:44.365 06:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:19:44.365 06:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:19:44.365 06:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:19:44.365 06:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:44.623 [2024-07-23 06:32:57.072408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:44.623 [2024-07-23 06:32:57.072472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.623 [2024-07-23 06:32:57.072501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1804f8634c80 00:19:44.623 [2024-07-23 06:32:57.072532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.623 [2024-07-23 06:32:57.073252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.623 [2024-07-23 06:32:57.073277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:44.623 [2024-07-23 06:32:57.073302] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:44.623 [2024-07-23 06:32:57.073314] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:44.623 [2024-07-23 06:32:57.073344] bdev_raid.c:3641:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:44.623 [2024-07-23 06:32:57.073356] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.623 [2024-07-23 06:32:57.073361] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1804f8634780 name raid_bdev1, state configuring 00:19:44.623 [2024-07-23 06:32:57.073370] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:44.623 [2024-07-23 06:32:57.073386] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1804f8634780 00:19:44.623 [2024-07-23 06:32:57.073390] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:44.623 [2024-07-23 06:32:57.073410] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1804f8697e20 00:19:44.623 [2024-07-23 06:32:57.073470] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1804f8634780 00:19:44.623 [2024-07-23 06:32:57.073481] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1804f8634780 00:19:44.623 [2024-07-23 06:32:57.073511] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.623 pt1 00:19:44.623 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:19:44.623 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:44.623 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:44.623 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:44.624 06:32:57 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:44.624 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:44.624 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:44.624 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:44.624 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:44.624 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:44.624 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:44.624 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.624 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.882 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:44.882 "name": "raid_bdev1", 00:19:44.882 "uuid": "606c7d7e-48bd-11ef-a06c-59ddad71024c", 00:19:44.882 "strip_size_kb": 0, 00:19:44.882 "state": "online", 00:19:44.882 "raid_level": "raid1", 00:19:44.882 "superblock": true, 00:19:44.882 "num_base_bdevs": 2, 00:19:44.882 "num_base_bdevs_discovered": 1, 00:19:44.882 "num_base_bdevs_operational": 1, 00:19:44.882 "base_bdevs_list": [ 00:19:44.882 { 00:19:44.882 "name": null, 00:19:44.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.882 "is_configured": false, 00:19:44.882 "data_offset": 256, 00:19:44.882 "data_size": 7936 00:19:44.882 }, 00:19:44.882 { 00:19:44.882 "name": "pt2", 00:19:44.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:44.882 "is_configured": true, 00:19:44.882 "data_offset": 256, 00:19:44.882 "data_size": 7936 00:19:44.882 } 00:19:44.882 ] 00:19:44.882 }' 00:19:44.882 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:44.882 06:32:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:45.140 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:45.140 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:19:45.399 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:19:45.399 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:45.399 06:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:19:45.658 [2024-07-23 06:32:58.092518] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:45.658 06:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 606c7d7e-48bd-11ef-a06c-59ddad71024c '!=' 606c7d7e-48bd-11ef-a06c-59ddad71024c ']' 00:19:45.658 06:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 65896 00:19:45.658 06:32:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 65896 ']' 00:19:45.658 06:32:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 65896 00:19:45.658 06:32:58 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:19:45.658 06:32:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:45.658 06:32:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps -c -o command 65896 00:19:45.658 06:32:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # tail -1 00:19:45.658 06:32:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:19:45.658 06:32:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:19:45.658 killing process with pid 65896 00:19:45.658 06:32:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65896' 00:19:45.658 06:32:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 65896 00:19:45.658 [2024-07-23 06:32:58.121416] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:45.658 [2024-07-23 06:32:58.121439] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:45.658 [2024-07-23 06:32:58.121451] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:45.658 [2024-07-23 06:32:58.121455] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1804f8634780 name raid_bdev1, state offline 00:19:45.658 06:32:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 65896 00:19:45.658 [2024-07-23 06:32:58.133973] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:45.930 06:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:19:45.930 00:19:45.930 real 0m13.059s 00:19:45.930 user 0m23.017s 00:19:45.930 sys 0m2.334s 00:19:45.930 06:32:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:45.930 06:32:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:45.930 ************************************ 00:19:45.930 END TEST raid_superblock_test_4k 00:19:45.930 ************************************ 00:19:45.930 06:32:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:45.930 06:32:58 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' '' = true ']' 00:19:45.930 06:32:58 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:19:45.930 06:32:58 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:45.930 06:32:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:45.930 06:32:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.930 06:32:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:45.930 ************************************ 00:19:45.930 START TEST raid_state_function_test_sb_md_separate 00:19:45.930 ************************************ 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 
00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=66287 00:19:45.930 Process raid pid: 66287 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66287' 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 66287 /var/tmp/spdk-raid.sock 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 66287 ']' 00:19:45.930 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:45.931 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:19:45.931 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:45.931 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.931 06:32:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.931 [2024-07-23 06:32:58.368619] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:45.931 [2024-07-23 06:32:58.368780] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:46.519 EAL: TSC is not safe to use in SMP mode 00:19:46.519 EAL: TSC is not invariant 00:19:46.519 [2024-07-23 06:32:58.925531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.519 [2024-07-23 06:32:59.015483] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:46.519 [2024-07-23 06:32:59.017762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.519 [2024-07-23 06:32:59.018666] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.519 [2024-07-23 06:32:59.018683] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:47.086 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.086 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:19:47.086 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:47.345 [2024-07-23 06:32:59.699632] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:47.345 [2024-07-23 06:32:59.699712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:47.345 [2024-07-23 06:32:59.699717] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:47.345 [2024-07-23 06:32:59.699741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:47.345 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:47.345 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:47.345 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:47.345 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:47.345 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:47.345 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:47.345 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:47.345 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:47.345 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:47.345 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:47.345 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.345 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.603 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:47.603 "name": "Existed_Raid", 00:19:47.603 "uuid": "67aa20a1-48bd-11ef-a06c-59ddad71024c", 00:19:47.603 "strip_size_kb": 0, 00:19:47.603 "state": "configuring", 00:19:47.603 "raid_level": "raid1", 00:19:47.603 "superblock": true, 00:19:47.603 "num_base_bdevs": 2, 00:19:47.603 "num_base_bdevs_discovered": 0, 00:19:47.603 "num_base_bdevs_operational": 2, 00:19:47.603 "base_bdevs_list": [ 00:19:47.603 { 00:19:47.603 "name": "BaseBdev1", 00:19:47.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.603 "is_configured": false, 00:19:47.603 "data_offset": 0, 00:19:47.603 "data_size": 0 00:19:47.603 }, 00:19:47.603 { 00:19:47.603 "name": "BaseBdev2", 00:19:47.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.603 "is_configured": false, 00:19:47.603 "data_offset": 0, 00:19:47.603 "data_size": 0 00:19:47.603 } 00:19:47.603 ] 00:19:47.603 }' 00:19:47.603 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:47.603 06:32:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.862 06:33:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:48.120 [2024-07-23 06:33:00.579657] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:48.120 [2024-07-23 06:33:00.579681] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xee9fe034500 name Existed_Raid, state configuring 00:19:48.120 06:33:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:48.378 [2024-07-23 06:33:00.799713] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:48.378 [2024-07-23 06:33:00.799768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:48.378 [2024-07-23 06:33:00.799773] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:48.378 [2024-07-23 06:33:00.799797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:48.378 06:33:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:48.636 [2024-07-23 06:33:01.024787] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:48.636 BaseBdev1 00:19:48.636 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:48.636 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:48.636 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:48.636 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:19:48.636 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:48.636 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:48.636 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:48.895 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:49.154 [ 00:19:49.154 { 00:19:49.154 "name": "BaseBdev1", 00:19:49.154 "aliases": [ 00:19:49.154 "68742ccd-48bd-11ef-a06c-59ddad71024c" 00:19:49.154 ], 00:19:49.154 "product_name": "Malloc disk", 00:19:49.154 "block_size": 4096, 00:19:49.154 "num_blocks": 8192, 00:19:49.154 "uuid": "68742ccd-48bd-11ef-a06c-59ddad71024c", 00:19:49.154 "md_size": 32, 00:19:49.154 "md_interleave": false, 00:19:49.154 "dif_type": 0, 00:19:49.154 "assigned_rate_limits": { 00:19:49.154 "rw_ios_per_sec": 0, 00:19:49.154 "rw_mbytes_per_sec": 0, 00:19:49.154 "r_mbytes_per_sec": 0, 00:19:49.154 "w_mbytes_per_sec": 0 00:19:49.154 }, 00:19:49.154 "claimed": true, 00:19:49.154 "claim_type": "exclusive_write", 00:19:49.154 "zoned": false, 00:19:49.154 "supported_io_types": { 00:19:49.154 "read": true, 00:19:49.154 "write": true, 00:19:49.154 "unmap": true, 00:19:49.154 "flush": true, 00:19:49.154 "reset": true, 00:19:49.154 "nvme_admin": false, 00:19:49.154 "nvme_io": false, 00:19:49.154 "nvme_io_md": false, 00:19:49.154 "write_zeroes": true, 00:19:49.154 "zcopy": true, 00:19:49.154 "get_zone_info": false, 00:19:49.154 "zone_management": false, 00:19:49.154 "zone_append": false, 00:19:49.154 "compare": false, 00:19:49.154 "compare_and_write": false, 00:19:49.154 "abort": true, 00:19:49.154 "seek_hole": false, 00:19:49.154 "seek_data": false, 00:19:49.154 "copy": true, 00:19:49.154 "nvme_iov_md": false 00:19:49.154 }, 00:19:49.154 "memory_domains": [ 00:19:49.154 { 00:19:49.154 "dma_device_id": "system", 00:19:49.154 "dma_device_type": 1 00:19:49.154 }, 00:19:49.154 { 00:19:49.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.154 "dma_device_type": 2 00:19:49.154 } 00:19:49.154 ], 00:19:49.154 "driver_specific": {} 00:19:49.154 } 00:19:49.154 ] 00:19:49.154 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:19:49.154 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:49.154 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:49.154 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:49.154 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:49.154 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:49.154 06:33:01 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:49.154 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:49.154 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:49.154 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:49.154 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:49.154 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.154 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.414 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:49.414 "name": "Existed_Raid", 00:19:49.414 "uuid": "6851fca5-48bd-11ef-a06c-59ddad71024c", 00:19:49.414 "strip_size_kb": 0, 00:19:49.414 "state": "configuring", 00:19:49.414 "raid_level": "raid1", 00:19:49.414 "superblock": true, 00:19:49.414 "num_base_bdevs": 2, 00:19:49.414 "num_base_bdevs_discovered": 1, 00:19:49.414 "num_base_bdevs_operational": 2, 00:19:49.414 "base_bdevs_list": [ 00:19:49.414 { 00:19:49.414 "name": "BaseBdev1", 00:19:49.414 "uuid": "68742ccd-48bd-11ef-a06c-59ddad71024c", 00:19:49.414 "is_configured": true, 00:19:49.414 "data_offset": 256, 00:19:49.414 "data_size": 7936 00:19:49.414 }, 00:19:49.414 { 00:19:49.414 "name": "BaseBdev2", 00:19:49.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.414 "is_configured": false, 00:19:49.414 "data_offset": 0, 00:19:49.414 "data_size": 0 00:19:49.414 } 00:19:49.414 ] 00:19:49.414 }' 00:19:49.414 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:49.414 06:33:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.980 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:49.980 [2024-07-23 06:33:02.491946] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:49.980 [2024-07-23 06:33:02.491977] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xee9fe034500 name Existed_Raid, state configuring 00:19:50.238 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:50.238 [2024-07-23 06:33:02.715975] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:50.238 [2024-07-23 06:33:02.716866] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:50.238 [2024-07-23 06:33:02.716933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:50.238 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:50.238 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:50.238 06:33:02 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:50.238 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:50.238 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:50.238 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:50.238 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:50.238 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:50.238 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:50.238 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:50.238 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:50.238 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:50.238 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.238 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.496 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:50.496 "name": "Existed_Raid", 00:19:50.496 "uuid": "6976626c-48bd-11ef-a06c-59ddad71024c", 00:19:50.496 "strip_size_kb": 0, 00:19:50.496 "state": "configuring", 00:19:50.496 "raid_level": "raid1", 00:19:50.496 "superblock": true, 00:19:50.496 "num_base_bdevs": 2, 00:19:50.496 "num_base_bdevs_discovered": 1, 00:19:50.496 "num_base_bdevs_operational": 2, 00:19:50.496 "base_bdevs_list": [ 00:19:50.496 { 00:19:50.496 "name": "BaseBdev1", 00:19:50.496 "uuid": "68742ccd-48bd-11ef-a06c-59ddad71024c", 00:19:50.496 "is_configured": true, 00:19:50.496 "data_offset": 256, 00:19:50.496 "data_size": 7936 00:19:50.496 }, 00:19:50.496 { 00:19:50.496 "name": "BaseBdev2", 00:19:50.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.496 "is_configured": false, 00:19:50.496 "data_offset": 0, 00:19:50.496 "data_size": 0 00:19:50.496 } 00:19:50.496 ] 00:19:50.496 }' 00:19:50.496 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:50.496 06:33:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.754 06:33:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:51.011 [2024-07-23 06:33:03.524180] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:51.011 [2024-07-23 06:33:03.524260] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xee9fe034a00 00:19:51.011 [2024-07-23 06:33:03.524266] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:51.011 [2024-07-23 06:33:03.524287] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0xee9fe097e20 00:19:51.011 [2024-07-23 06:33:03.524315] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xee9fe034a00 00:19:51.011 [2024-07-23 06:33:03.524320] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xee9fe034a00 00:19:51.011 [2024-07-23 06:33:03.524334] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.011 BaseBdev2 00:19:51.269 06:33:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:51.269 06:33:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:51.269 06:33:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:51.269 06:33:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:19:51.269 06:33:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:51.269 06:33:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:51.269 06:33:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:51.527 06:33:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:51.527 [ 00:19:51.527 { 00:19:51.527 "name": "BaseBdev2", 00:19:51.527 "aliases": [ 00:19:51.527 "69f1b162-48bd-11ef-a06c-59ddad71024c" 00:19:51.527 ], 00:19:51.527 "product_name": "Malloc disk", 00:19:51.527 "block_size": 4096, 00:19:51.527 "num_blocks": 8192, 00:19:51.527 "uuid": "69f1b162-48bd-11ef-a06c-59ddad71024c", 00:19:51.527 "md_size": 32, 00:19:51.527 "md_interleave": false, 00:19:51.527 "dif_type": 0, 00:19:51.527 "assigned_rate_limits": { 00:19:51.527 "rw_ios_per_sec": 0, 00:19:51.527 "rw_mbytes_per_sec": 0, 00:19:51.527 "r_mbytes_per_sec": 0, 00:19:51.527 "w_mbytes_per_sec": 0 00:19:51.527 }, 00:19:51.527 "claimed": true, 00:19:51.527 "claim_type": "exclusive_write", 00:19:51.527 "zoned": false, 00:19:51.527 "supported_io_types": { 00:19:51.527 "read": true, 00:19:51.527 "write": true, 00:19:51.527 "unmap": true, 00:19:51.527 "flush": true, 00:19:51.527 "reset": true, 00:19:51.527 "nvme_admin": false, 00:19:51.527 "nvme_io": false, 00:19:51.527 "nvme_io_md": false, 00:19:51.527 "write_zeroes": true, 00:19:51.527 "zcopy": true, 00:19:51.527 "get_zone_info": false, 00:19:51.527 "zone_management": false, 00:19:51.527 "zone_append": false, 00:19:51.527 "compare": false, 00:19:51.527 "compare_and_write": false, 00:19:51.527 "abort": true, 00:19:51.527 "seek_hole": false, 00:19:51.527 "seek_data": false, 00:19:51.527 "copy": true, 00:19:51.527 "nvme_iov_md": false 00:19:51.527 }, 00:19:51.527 "memory_domains": [ 00:19:51.527 { 00:19:51.527 "dma_device_id": "system", 00:19:51.527 "dma_device_type": 1 00:19:51.527 }, 00:19:51.527 { 00:19:51.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.527 "dma_device_type": 2 00:19:51.527 } 00:19:51.527 ], 00:19:51.527 "driver_specific": {} 00:19:51.527 } 00:19:51.527 ] 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:19:51.786 06:33:04 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.786 "name": "Existed_Raid", 00:19:51.786 "uuid": "6976626c-48bd-11ef-a06c-59ddad71024c", 00:19:51.786 "strip_size_kb": 0, 00:19:51.786 "state": "online", 00:19:51.786 "raid_level": "raid1", 00:19:51.786 "superblock": true, 00:19:51.786 "num_base_bdevs": 2, 00:19:51.786 "num_base_bdevs_discovered": 2, 00:19:51.786 "num_base_bdevs_operational": 2, 00:19:51.786 "base_bdevs_list": [ 00:19:51.786 { 00:19:51.786 "name": "BaseBdev1", 00:19:51.786 "uuid": "68742ccd-48bd-11ef-a06c-59ddad71024c", 00:19:51.786 "is_configured": true, 00:19:51.786 "data_offset": 256, 00:19:51.786 "data_size": 7936 00:19:51.786 }, 00:19:51.786 { 00:19:51.786 "name": "BaseBdev2", 00:19:51.786 "uuid": "69f1b162-48bd-11ef-a06c-59ddad71024c", 00:19:51.786 "is_configured": true, 00:19:51.786 "data_offset": 256, 00:19:51.786 "data_size": 7936 00:19:51.786 } 00:19:51.786 ] 00:19:51.786 }' 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.786 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.351 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:52.351 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:52.351 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:52.351 06:33:04 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:52.351 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:52.351 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:19:52.351 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:52.351 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:52.351 [2024-07-23 06:33:04.860251] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:52.610 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:52.610 "name": "Existed_Raid", 00:19:52.610 "aliases": [ 00:19:52.610 "6976626c-48bd-11ef-a06c-59ddad71024c" 00:19:52.610 ], 00:19:52.610 "product_name": "Raid Volume", 00:19:52.610 "block_size": 4096, 00:19:52.610 "num_blocks": 7936, 00:19:52.610 "uuid": "6976626c-48bd-11ef-a06c-59ddad71024c", 00:19:52.610 "md_size": 32, 00:19:52.610 "md_interleave": false, 00:19:52.610 "dif_type": 0, 00:19:52.610 "assigned_rate_limits": { 00:19:52.610 "rw_ios_per_sec": 0, 00:19:52.610 "rw_mbytes_per_sec": 0, 00:19:52.610 "r_mbytes_per_sec": 0, 00:19:52.610 "w_mbytes_per_sec": 0 00:19:52.610 }, 00:19:52.610 "claimed": false, 00:19:52.610 "zoned": false, 00:19:52.610 "supported_io_types": { 00:19:52.610 "read": true, 00:19:52.610 "write": true, 00:19:52.610 "unmap": false, 00:19:52.610 "flush": false, 00:19:52.610 "reset": true, 00:19:52.610 "nvme_admin": false, 00:19:52.610 "nvme_io": false, 00:19:52.610 "nvme_io_md": false, 00:19:52.610 "write_zeroes": true, 00:19:52.610 "zcopy": false, 00:19:52.610 "get_zone_info": false, 00:19:52.610 "zone_management": false, 00:19:52.610 "zone_append": false, 00:19:52.610 "compare": false, 00:19:52.610 "compare_and_write": false, 00:19:52.610 "abort": false, 00:19:52.610 "seek_hole": false, 00:19:52.610 "seek_data": false, 00:19:52.610 "copy": false, 00:19:52.610 "nvme_iov_md": false 00:19:52.610 }, 00:19:52.610 "memory_domains": [ 00:19:52.610 { 00:19:52.610 "dma_device_id": "system", 00:19:52.610 "dma_device_type": 1 00:19:52.610 }, 00:19:52.610 { 00:19:52.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.610 "dma_device_type": 2 00:19:52.610 }, 00:19:52.610 { 00:19:52.610 "dma_device_id": "system", 00:19:52.610 "dma_device_type": 1 00:19:52.610 }, 00:19:52.610 { 00:19:52.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.610 "dma_device_type": 2 00:19:52.610 } 00:19:52.610 ], 00:19:52.610 "driver_specific": { 00:19:52.610 "raid": { 00:19:52.610 "uuid": "6976626c-48bd-11ef-a06c-59ddad71024c", 00:19:52.610 "strip_size_kb": 0, 00:19:52.610 "state": "online", 00:19:52.610 "raid_level": "raid1", 00:19:52.610 "superblock": true, 00:19:52.610 "num_base_bdevs": 2, 00:19:52.610 "num_base_bdevs_discovered": 2, 00:19:52.610 "num_base_bdevs_operational": 2, 00:19:52.610 "base_bdevs_list": [ 00:19:52.610 { 00:19:52.610 "name": "BaseBdev1", 00:19:52.610 "uuid": "68742ccd-48bd-11ef-a06c-59ddad71024c", 00:19:52.610 "is_configured": true, 00:19:52.610 "data_offset": 256, 00:19:52.610 "data_size": 7936 00:19:52.610 }, 00:19:52.610 { 00:19:52.610 "name": "BaseBdev2", 00:19:52.610 "uuid": "69f1b162-48bd-11ef-a06c-59ddad71024c", 00:19:52.610 "is_configured": true, 00:19:52.610 "data_offset": 
256, 00:19:52.610 "data_size": 7936 00:19:52.610 } 00:19:52.610 ] 00:19:52.610 } 00:19:52.610 } 00:19:52.610 }' 00:19:52.610 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:52.610 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:52.610 BaseBdev2' 00:19:52.610 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:52.610 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:52.610 06:33:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:52.610 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:52.610 "name": "BaseBdev1", 00:19:52.610 "aliases": [ 00:19:52.610 "68742ccd-48bd-11ef-a06c-59ddad71024c" 00:19:52.610 ], 00:19:52.610 "product_name": "Malloc disk", 00:19:52.610 "block_size": 4096, 00:19:52.610 "num_blocks": 8192, 00:19:52.610 "uuid": "68742ccd-48bd-11ef-a06c-59ddad71024c", 00:19:52.610 "md_size": 32, 00:19:52.610 "md_interleave": false, 00:19:52.610 "dif_type": 0, 00:19:52.610 "assigned_rate_limits": { 00:19:52.610 "rw_ios_per_sec": 0, 00:19:52.610 "rw_mbytes_per_sec": 0, 00:19:52.610 "r_mbytes_per_sec": 0, 00:19:52.610 "w_mbytes_per_sec": 0 00:19:52.610 }, 00:19:52.610 "claimed": true, 00:19:52.610 "claim_type": "exclusive_write", 00:19:52.610 "zoned": false, 00:19:52.610 "supported_io_types": { 00:19:52.610 "read": true, 00:19:52.610 "write": true, 00:19:52.610 "unmap": true, 00:19:52.610 "flush": true, 00:19:52.610 "reset": true, 00:19:52.610 "nvme_admin": false, 00:19:52.610 "nvme_io": false, 00:19:52.610 "nvme_io_md": false, 00:19:52.610 "write_zeroes": true, 00:19:52.610 "zcopy": true, 00:19:52.610 "get_zone_info": false, 00:19:52.610 "zone_management": false, 00:19:52.610 "zone_append": false, 00:19:52.610 "compare": false, 00:19:52.610 "compare_and_write": false, 00:19:52.610 "abort": true, 00:19:52.610 "seek_hole": false, 00:19:52.610 "seek_data": false, 00:19:52.610 "copy": true, 00:19:52.610 "nvme_iov_md": false 00:19:52.610 }, 00:19:52.610 "memory_domains": [ 00:19:52.610 { 00:19:52.610 "dma_device_id": "system", 00:19:52.610 "dma_device_type": 1 00:19:52.610 }, 00:19:52.610 { 00:19:52.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.610 "dma_device_type": 2 00:19:52.610 } 00:19:52.610 ], 00:19:52.610 "driver_specific": {} 00:19:52.610 }' 00:19:52.610 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:52.610 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:52.610 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:19:52.610 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:52.610 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:52.869 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:19:52.869 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:19:52.869 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:52.869 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:19:52.869 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:52.869 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:52.869 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:19:52.869 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:52.869 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:52.869 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:53.129 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:53.129 "name": "BaseBdev2", 00:19:53.129 "aliases": [ 00:19:53.129 "69f1b162-48bd-11ef-a06c-59ddad71024c" 00:19:53.129 ], 00:19:53.129 "product_name": "Malloc disk", 00:19:53.129 "block_size": 4096, 00:19:53.129 "num_blocks": 8192, 00:19:53.129 "uuid": "69f1b162-48bd-11ef-a06c-59ddad71024c", 00:19:53.129 "md_size": 32, 00:19:53.129 "md_interleave": false, 00:19:53.129 "dif_type": 0, 00:19:53.129 "assigned_rate_limits": { 00:19:53.129 "rw_ios_per_sec": 0, 00:19:53.129 "rw_mbytes_per_sec": 0, 00:19:53.129 "r_mbytes_per_sec": 0, 00:19:53.129 "w_mbytes_per_sec": 0 00:19:53.129 }, 00:19:53.129 "claimed": true, 00:19:53.129 "claim_type": "exclusive_write", 00:19:53.129 "zoned": false, 00:19:53.129 "supported_io_types": { 00:19:53.129 "read": true, 00:19:53.129 "write": true, 00:19:53.129 "unmap": true, 00:19:53.129 "flush": true, 00:19:53.129 "reset": true, 00:19:53.129 "nvme_admin": false, 00:19:53.129 "nvme_io": false, 00:19:53.129 "nvme_io_md": false, 00:19:53.129 "write_zeroes": true, 00:19:53.129 "zcopy": true, 00:19:53.129 "get_zone_info": false, 00:19:53.129 "zone_management": false, 00:19:53.129 "zone_append": false, 00:19:53.129 "compare": false, 00:19:53.129 "compare_and_write": false, 00:19:53.129 "abort": true, 00:19:53.129 "seek_hole": false, 00:19:53.129 "seek_data": false, 00:19:53.129 "copy": true, 00:19:53.129 "nvme_iov_md": false 00:19:53.129 }, 00:19:53.129 "memory_domains": [ 00:19:53.129 { 00:19:53.129 "dma_device_id": "system", 00:19:53.129 "dma_device_type": 1 00:19:53.129 }, 00:19:53.129 { 00:19:53.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.129 "dma_device_type": 2 00:19:53.129 } 00:19:53.129 ], 00:19:53.129 "driver_specific": {} 00:19:53.129 }' 00:19:53.129 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:53.129 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:53.129 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:19:53.129 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:53.129 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:53.129 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 
32 ]] 00:19:53.129 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:53.129 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:53.129 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:19:53.129 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:53.129 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:53.129 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:19:53.129 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:53.388 [2024-07-23 06:33:05.784292] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.388 06:33:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.646 06:33:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:53.646 "name": "Existed_Raid", 00:19:53.646 "uuid": "6976626c-48bd-11ef-a06c-59ddad71024c", 00:19:53.646 "strip_size_kb": 0, 00:19:53.646 "state": "online", 00:19:53.646 
"raid_level": "raid1", 00:19:53.646 "superblock": true, 00:19:53.646 "num_base_bdevs": 2, 00:19:53.646 "num_base_bdevs_discovered": 1, 00:19:53.646 "num_base_bdevs_operational": 1, 00:19:53.646 "base_bdevs_list": [ 00:19:53.646 { 00:19:53.646 "name": null, 00:19:53.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.646 "is_configured": false, 00:19:53.646 "data_offset": 256, 00:19:53.646 "data_size": 7936 00:19:53.646 }, 00:19:53.646 { 00:19:53.646 "name": "BaseBdev2", 00:19:53.646 "uuid": "69f1b162-48bd-11ef-a06c-59ddad71024c", 00:19:53.646 "is_configured": true, 00:19:53.646 "data_offset": 256, 00:19:53.646 "data_size": 7936 00:19:53.646 } 00:19:53.646 ] 00:19:53.646 }' 00:19:53.646 06:33:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:53.646 06:33:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.904 06:33:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:53.904 06:33:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:53.904 06:33:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.904 06:33:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:54.163 06:33:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:54.163 06:33:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:54.163 06:33:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:54.421 [2024-07-23 06:33:06.906818] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:54.421 [2024-07-23 06:33:06.906887] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:54.421 [2024-07-23 06:33:06.913258] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.421 [2024-07-23 06:33:06.913274] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:54.421 [2024-07-23 06:33:06.913279] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xee9fe034a00 name Existed_Raid, state offline 00:19:54.421 06:33:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:54.421 06:33:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:54.421 06:33:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.421 06:33:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:19:54.988 
06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 66287 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 66287 ']' 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 66287 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # tail -1 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps -c -o command 66287 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:19:54.988 killing process with pid 66287 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66287' 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 66287 00:19:54.988 [2024-07-23 06:33:07.229357] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:54.988 [2024-07-23 06:33:07.229388] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 66287 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:19:54.988 00:19:54.988 real 0m9.055s 00:19:54.988 user 0m15.879s 00:19:54.988 sys 0m1.465s 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:54.988 06:33:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.988 ************************************ 00:19:54.988 END TEST raid_state_function_test_sb_md_separate 00:19:54.988 ************************************ 00:19:54.988 06:33:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:54.988 06:33:07 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:54.988 06:33:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:54.988 06:33:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.988 06:33:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:54.988 ************************************ 00:19:54.988 START TEST raid_superblock_test_md_separate 00:19:54.988 ************************************ 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=66561 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 66561 /var/tmp/spdk-raid.sock 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 66561 ']' 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.988 06:33:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.988 [2024-07-23 06:33:07.472913] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:54.988 [2024-07-23 06:33:07.473191] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:55.555 EAL: TSC is not safe to use in SMP mode 00:19:55.555 EAL: TSC is not invariant 00:19:55.555 [2024-07-23 06:33:08.036016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.813 [2024-07-23 06:33:08.129970] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:19:55.813 [2024-07-23 06:33:08.132267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.813 [2024-07-23 06:33:08.133078] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:55.813 [2024-07-23 06:33:08.133092] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:56.071 06:33:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.071 06:33:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:19:56.071 06:33:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:56.071 06:33:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:56.071 06:33:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:56.071 06:33:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:56.071 06:33:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:56.071 06:33:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:56.071 06:33:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:56.071 06:33:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:56.071 06:33:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:56.330 malloc1 00:19:56.330 06:33:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:56.587 [2024-07-23 06:33:08.986083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:56.587 [2024-07-23 06:33:08.986134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.587 [2024-07-23 06:33:08.986187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b5731834780 00:19:56.587 [2024-07-23 06:33:08.986196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.587 [2024-07-23 06:33:08.987019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.587 [2024-07-23 06:33:08.987043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:56.587 pt1 00:19:56.587 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:56.587 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:56.587 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:56.587 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:56.587 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:56.587 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:56.587 06:33:09 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:56.587 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:56.587 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:56.844 malloc2 00:19:56.844 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:57.103 [2024-07-23 06:33:09.470116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:57.103 [2024-07-23 06:33:09.470207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.103 [2024-07-23 06:33:09.470219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b5731834c80 00:19:57.103 [2024-07-23 06:33:09.470228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.103 [2024-07-23 06:33:09.470833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.103 [2024-07-23 06:33:09.470853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:57.103 pt2 00:19:57.103 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:57.103 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:57.103 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:19:57.362 [2024-07-23 06:33:09.742161] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:57.362 [2024-07-23 06:33:09.742818] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:57.362 [2024-07-23 06:33:09.742878] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b5731834f00 00:19:57.362 [2024-07-23 06:33:09.742884] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:57.362 [2024-07-23 06:33:09.742922] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b5731897e20 00:19:57.362 [2024-07-23 06:33:09.742980] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b5731834f00 00:19:57.362 [2024-07-23 06:33:09.742984] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b5731834f00 00:19:57.362 [2024-07-23 06:33:09.743016] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.362 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:57.362 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:57.362 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:57.362 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:57.362 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:57.362 
06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:57.362 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:57.362 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:57.362 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:57.362 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:57.362 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.362 06:33:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.620 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:57.620 "name": "raid_bdev1", 00:19:57.620 "uuid": "6da67e82-48bd-11ef-a06c-59ddad71024c", 00:19:57.620 "strip_size_kb": 0, 00:19:57.620 "state": "online", 00:19:57.620 "raid_level": "raid1", 00:19:57.620 "superblock": true, 00:19:57.620 "num_base_bdevs": 2, 00:19:57.620 "num_base_bdevs_discovered": 2, 00:19:57.620 "num_base_bdevs_operational": 2, 00:19:57.620 "base_bdevs_list": [ 00:19:57.620 { 00:19:57.620 "name": "pt1", 00:19:57.620 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:57.620 "is_configured": true, 00:19:57.620 "data_offset": 256, 00:19:57.620 "data_size": 7936 00:19:57.620 }, 00:19:57.620 { 00:19:57.620 "name": "pt2", 00:19:57.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.620 "is_configured": true, 00:19:57.620 "data_offset": 256, 00:19:57.620 "data_size": 7936 00:19:57.620 } 00:19:57.620 ] 00:19:57.620 }' 00:19:57.620 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:57.620 06:33:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.878 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:19:57.878 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:57.878 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:57.878 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:57.878 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:57.878 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:19:57.878 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:57.878 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:58.136 [2024-07-23 06:33:10.582281] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:58.136 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:58.136 "name": "raid_bdev1", 00:19:58.136 "aliases": [ 00:19:58.136 "6da67e82-48bd-11ef-a06c-59ddad71024c" 00:19:58.136 ], 00:19:58.136 "product_name": "Raid Volume", 00:19:58.136 "block_size": 
4096, 00:19:58.136 "num_blocks": 7936, 00:19:58.136 "uuid": "6da67e82-48bd-11ef-a06c-59ddad71024c", 00:19:58.136 "md_size": 32, 00:19:58.136 "md_interleave": false, 00:19:58.136 "dif_type": 0, 00:19:58.136 "assigned_rate_limits": { 00:19:58.136 "rw_ios_per_sec": 0, 00:19:58.136 "rw_mbytes_per_sec": 0, 00:19:58.136 "r_mbytes_per_sec": 0, 00:19:58.136 "w_mbytes_per_sec": 0 00:19:58.136 }, 00:19:58.136 "claimed": false, 00:19:58.136 "zoned": false, 00:19:58.136 "supported_io_types": { 00:19:58.136 "read": true, 00:19:58.136 "write": true, 00:19:58.136 "unmap": false, 00:19:58.136 "flush": false, 00:19:58.136 "reset": true, 00:19:58.136 "nvme_admin": false, 00:19:58.136 "nvme_io": false, 00:19:58.136 "nvme_io_md": false, 00:19:58.136 "write_zeroes": true, 00:19:58.136 "zcopy": false, 00:19:58.136 "get_zone_info": false, 00:19:58.136 "zone_management": false, 00:19:58.136 "zone_append": false, 00:19:58.136 "compare": false, 00:19:58.136 "compare_and_write": false, 00:19:58.136 "abort": false, 00:19:58.136 "seek_hole": false, 00:19:58.136 "seek_data": false, 00:19:58.136 "copy": false, 00:19:58.136 "nvme_iov_md": false 00:19:58.136 }, 00:19:58.136 "memory_domains": [ 00:19:58.136 { 00:19:58.136 "dma_device_id": "system", 00:19:58.136 "dma_device_type": 1 00:19:58.136 }, 00:19:58.136 { 00:19:58.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.136 "dma_device_type": 2 00:19:58.136 }, 00:19:58.136 { 00:19:58.136 "dma_device_id": "system", 00:19:58.136 "dma_device_type": 1 00:19:58.136 }, 00:19:58.136 { 00:19:58.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.136 "dma_device_type": 2 00:19:58.136 } 00:19:58.136 ], 00:19:58.136 "driver_specific": { 00:19:58.136 "raid": { 00:19:58.136 "uuid": "6da67e82-48bd-11ef-a06c-59ddad71024c", 00:19:58.136 "strip_size_kb": 0, 00:19:58.136 "state": "online", 00:19:58.136 "raid_level": "raid1", 00:19:58.136 "superblock": true, 00:19:58.136 "num_base_bdevs": 2, 00:19:58.136 "num_base_bdevs_discovered": 2, 00:19:58.136 "num_base_bdevs_operational": 2, 00:19:58.136 "base_bdevs_list": [ 00:19:58.136 { 00:19:58.136 "name": "pt1", 00:19:58.136 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:58.136 "is_configured": true, 00:19:58.136 "data_offset": 256, 00:19:58.136 "data_size": 7936 00:19:58.136 }, 00:19:58.136 { 00:19:58.136 "name": "pt2", 00:19:58.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.136 "is_configured": true, 00:19:58.136 "data_offset": 256, 00:19:58.136 "data_size": 7936 00:19:58.136 } 00:19:58.136 ] 00:19:58.136 } 00:19:58.136 } 00:19:58.136 }' 00:19:58.136 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:58.136 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:58.136 pt2' 00:19:58.136 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:58.136 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:58.136 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:58.395 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:58.395 "name": "pt1", 00:19:58.395 "aliases": [ 00:19:58.395 "00000000-0000-0000-0000-000000000001" 00:19:58.395 ], 00:19:58.395 "product_name": 
"passthru", 00:19:58.395 "block_size": 4096, 00:19:58.395 "num_blocks": 8192, 00:19:58.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:58.395 "md_size": 32, 00:19:58.395 "md_interleave": false, 00:19:58.395 "dif_type": 0, 00:19:58.395 "assigned_rate_limits": { 00:19:58.395 "rw_ios_per_sec": 0, 00:19:58.395 "rw_mbytes_per_sec": 0, 00:19:58.395 "r_mbytes_per_sec": 0, 00:19:58.395 "w_mbytes_per_sec": 0 00:19:58.395 }, 00:19:58.395 "claimed": true, 00:19:58.395 "claim_type": "exclusive_write", 00:19:58.395 "zoned": false, 00:19:58.395 "supported_io_types": { 00:19:58.395 "read": true, 00:19:58.395 "write": true, 00:19:58.395 "unmap": true, 00:19:58.395 "flush": true, 00:19:58.395 "reset": true, 00:19:58.395 "nvme_admin": false, 00:19:58.395 "nvme_io": false, 00:19:58.395 "nvme_io_md": false, 00:19:58.395 "write_zeroes": true, 00:19:58.395 "zcopy": true, 00:19:58.395 "get_zone_info": false, 00:19:58.395 "zone_management": false, 00:19:58.395 "zone_append": false, 00:19:58.395 "compare": false, 00:19:58.395 "compare_and_write": false, 00:19:58.395 "abort": true, 00:19:58.395 "seek_hole": false, 00:19:58.395 "seek_data": false, 00:19:58.395 "copy": true, 00:19:58.395 "nvme_iov_md": false 00:19:58.395 }, 00:19:58.395 "memory_domains": [ 00:19:58.395 { 00:19:58.395 "dma_device_id": "system", 00:19:58.395 "dma_device_type": 1 00:19:58.395 }, 00:19:58.395 { 00:19:58.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.395 "dma_device_type": 2 00:19:58.395 } 00:19:58.395 ], 00:19:58.395 "driver_specific": { 00:19:58.395 "passthru": { 00:19:58.395 "name": "pt1", 00:19:58.395 "base_bdev_name": "malloc1" 00:19:58.395 } 00:19:58.395 } 00:19:58.395 }' 00:19:58.395 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.395 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.395 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:19:58.395 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.395 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.654 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:19:58.654 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.654 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.654 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:19:58.654 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.654 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.654 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:19:58.654 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:58.654 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:58.654 06:33:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:58.912 "name": 
"pt2", 00:19:58.912 "aliases": [ 00:19:58.912 "00000000-0000-0000-0000-000000000002" 00:19:58.912 ], 00:19:58.912 "product_name": "passthru", 00:19:58.912 "block_size": 4096, 00:19:58.912 "num_blocks": 8192, 00:19:58.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.912 "md_size": 32, 00:19:58.912 "md_interleave": false, 00:19:58.912 "dif_type": 0, 00:19:58.912 "assigned_rate_limits": { 00:19:58.912 "rw_ios_per_sec": 0, 00:19:58.912 "rw_mbytes_per_sec": 0, 00:19:58.912 "r_mbytes_per_sec": 0, 00:19:58.912 "w_mbytes_per_sec": 0 00:19:58.912 }, 00:19:58.912 "claimed": true, 00:19:58.912 "claim_type": "exclusive_write", 00:19:58.912 "zoned": false, 00:19:58.912 "supported_io_types": { 00:19:58.912 "read": true, 00:19:58.912 "write": true, 00:19:58.912 "unmap": true, 00:19:58.912 "flush": true, 00:19:58.912 "reset": true, 00:19:58.912 "nvme_admin": false, 00:19:58.912 "nvme_io": false, 00:19:58.912 "nvme_io_md": false, 00:19:58.912 "write_zeroes": true, 00:19:58.912 "zcopy": true, 00:19:58.912 "get_zone_info": false, 00:19:58.912 "zone_management": false, 00:19:58.912 "zone_append": false, 00:19:58.912 "compare": false, 00:19:58.912 "compare_and_write": false, 00:19:58.912 "abort": true, 00:19:58.912 "seek_hole": false, 00:19:58.912 "seek_data": false, 00:19:58.912 "copy": true, 00:19:58.912 "nvme_iov_md": false 00:19:58.912 }, 00:19:58.912 "memory_domains": [ 00:19:58.912 { 00:19:58.912 "dma_device_id": "system", 00:19:58.912 "dma_device_type": 1 00:19:58.912 }, 00:19:58.912 { 00:19:58.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.912 "dma_device_type": 2 00:19:58.912 } 00:19:58.912 ], 00:19:58.912 "driver_specific": { 00:19:58.912 "passthru": { 00:19:58.912 "name": "pt2", 00:19:58.912 "base_bdev_name": "malloc2" 00:19:58.912 } 00:19:58.912 } 00:19:58.912 }' 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:58.912 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:19:59.169 [2024-07-23 06:33:11.566323] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:59.169 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=6da67e82-48bd-11ef-a06c-59ddad71024c 00:19:59.170 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 6da67e82-48bd-11ef-a06c-59ddad71024c ']' 00:19:59.170 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:59.427 [2024-07-23 06:33:11.798293] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:59.427 [2024-07-23 06:33:11.798313] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:59.427 [2024-07-23 06:33:11.798350] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:59.427 [2024-07-23 06:33:11.798364] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:59.427 [2024-07-23 06:33:11.798368] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b5731834f00 name raid_bdev1, state offline 00:19:59.427 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:59.427 06:33:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.692 06:33:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:59.692 06:33:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:59.692 06:33:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:59.692 06:33:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:59.964 06:33:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:59.964 06:33:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:00.223 06:33:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:00.223 06:33:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:00.481 06:33:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:20:00.481 06:33:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:20:00.481 06:33:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:20:00.481 06:33:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:20:00.481 06:33:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.481 
06:33:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.481 06:33:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.481 06:33:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.481 06:33:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.481 06:33:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.481 06:33:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.481 06:33:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:00.481 06:33:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:20:00.740 [2024-07-23 06:33:13.074337] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:00.740 [2024-07-23 06:33:13.074909] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:00.740 [2024-07-23 06:33:13.074934] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:00.740 [2024-07-23 06:33:13.074970] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:00.740 [2024-07-23 06:33:13.074980] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.740 [2024-07-23 06:33:13.074985] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b5731834c80 name raid_bdev1, state configuring 00:20:00.740 request: 00:20:00.740 { 00:20:00.740 "name": "raid_bdev1", 00:20:00.740 "raid_level": "raid1", 00:20:00.740 "base_bdevs": [ 00:20:00.740 "malloc1", 00:20:00.740 "malloc2" 00:20:00.740 ], 00:20:00.740 "superblock": false, 00:20:00.740 "method": "bdev_raid_create", 00:20:00.740 "req_id": 1 00:20:00.740 } 00:20:00.740 Got JSON-RPC error response 00:20:00.740 response: 00:20:00.740 { 00:20:00.740 "code": -17, 00:20:00.740 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:00.740 } 00:20:00.740 06:33:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:20:00.740 06:33:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.740 06:33:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.740 06:33:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.740 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.740 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:20:00.998 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:20:00.999 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 
-- # '[' -n '' ']' 00:20:00.999 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:01.257 [2024-07-23 06:33:13.594346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:01.257 [2024-07-23 06:33:13.594409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.257 [2024-07-23 06:33:13.594421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b5731834780 00:20:01.257 [2024-07-23 06:33:13.594429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.257 [2024-07-23 06:33:13.595172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.257 [2024-07-23 06:33:13.595196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:01.257 [2024-07-23 06:33:13.595220] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:01.257 [2024-07-23 06:33:13.595231] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:01.257 pt1 00:20:01.257 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:01.257 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:01.257 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:01.257 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:01.257 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:01.257 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:01.257 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:01.257 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:01.257 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:01.257 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:01.257 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.257 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.515 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:01.515 "name": "raid_bdev1", 00:20:01.515 "uuid": "6da67e82-48bd-11ef-a06c-59ddad71024c", 00:20:01.515 "strip_size_kb": 0, 00:20:01.515 "state": "configuring", 00:20:01.515 "raid_level": "raid1", 00:20:01.515 "superblock": true, 00:20:01.515 "num_base_bdevs": 2, 00:20:01.515 "num_base_bdevs_discovered": 1, 00:20:01.515 "num_base_bdevs_operational": 2, 00:20:01.515 "base_bdevs_list": [ 00:20:01.515 { 00:20:01.515 "name": "pt1", 00:20:01.515 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:01.516 "is_configured": true, 00:20:01.516 "data_offset": 256, 00:20:01.516 "data_size": 7936 00:20:01.516 }, 00:20:01.516 { 
00:20:01.516 "name": null, 00:20:01.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:01.516 "is_configured": false, 00:20:01.516 "data_offset": 256, 00:20:01.516 "data_size": 7936 00:20:01.516 } 00:20:01.516 ] 00:20:01.516 }' 00:20:01.516 06:33:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:01.516 06:33:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.773 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:20:01.773 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:01.773 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:01.773 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:02.032 [2024-07-23 06:33:14.442370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:02.032 [2024-07-23 06:33:14.442426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.032 [2024-07-23 06:33:14.442437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b5731834f00 00:20:02.032 [2024-07-23 06:33:14.442445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.032 [2024-07-23 06:33:14.442512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.032 [2024-07-23 06:33:14.442521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:02.032 [2024-07-23 06:33:14.442544] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:02.032 [2024-07-23 06:33:14.442557] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:02.032 [2024-07-23 06:33:14.442582] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b5731835180 00:20:02.032 [2024-07-23 06:33:14.442586] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:02.032 [2024-07-23 06:33:14.442605] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b5731897e20 00:20:02.032 [2024-07-23 06:33:14.442627] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b5731835180 00:20:02.032 [2024-07-23 06:33:14.442630] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b5731835180 00:20:02.032 [2024-07-23 06:33:14.442645] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.032 pt2 00:20:02.032 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:02.032 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:02.032 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:02.032 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:02.032 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:02.032 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:02.032 
06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:02.032 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:02.032 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.032 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.032 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.032 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.032 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.032 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.290 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.290 "name": "raid_bdev1", 00:20:02.290 "uuid": "6da67e82-48bd-11ef-a06c-59ddad71024c", 00:20:02.290 "strip_size_kb": 0, 00:20:02.290 "state": "online", 00:20:02.290 "raid_level": "raid1", 00:20:02.290 "superblock": true, 00:20:02.290 "num_base_bdevs": 2, 00:20:02.290 "num_base_bdevs_discovered": 2, 00:20:02.290 "num_base_bdevs_operational": 2, 00:20:02.290 "base_bdevs_list": [ 00:20:02.290 { 00:20:02.290 "name": "pt1", 00:20:02.290 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:02.290 "is_configured": true, 00:20:02.290 "data_offset": 256, 00:20:02.290 "data_size": 7936 00:20:02.290 }, 00:20:02.290 { 00:20:02.290 "name": "pt2", 00:20:02.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.290 "is_configured": true, 00:20:02.290 "data_offset": 256, 00:20:02.290 "data_size": 7936 00:20:02.290 } 00:20:02.290 ] 00:20:02.290 }' 00:20:02.290 06:33:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.290 06:33:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.547 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:02.547 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:02.547 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:02.547 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:02.547 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:02.547 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:20:02.547 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:02.547 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:03.114 [2024-07-23 06:33:15.338431] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.114 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:03.114 "name": "raid_bdev1", 00:20:03.114 "aliases": [ 00:20:03.114 
"6da67e82-48bd-11ef-a06c-59ddad71024c" 00:20:03.114 ], 00:20:03.114 "product_name": "Raid Volume", 00:20:03.114 "block_size": 4096, 00:20:03.114 "num_blocks": 7936, 00:20:03.114 "uuid": "6da67e82-48bd-11ef-a06c-59ddad71024c", 00:20:03.114 "md_size": 32, 00:20:03.114 "md_interleave": false, 00:20:03.114 "dif_type": 0, 00:20:03.114 "assigned_rate_limits": { 00:20:03.114 "rw_ios_per_sec": 0, 00:20:03.114 "rw_mbytes_per_sec": 0, 00:20:03.114 "r_mbytes_per_sec": 0, 00:20:03.114 "w_mbytes_per_sec": 0 00:20:03.114 }, 00:20:03.114 "claimed": false, 00:20:03.114 "zoned": false, 00:20:03.114 "supported_io_types": { 00:20:03.114 "read": true, 00:20:03.114 "write": true, 00:20:03.114 "unmap": false, 00:20:03.114 "flush": false, 00:20:03.114 "reset": true, 00:20:03.114 "nvme_admin": false, 00:20:03.114 "nvme_io": false, 00:20:03.114 "nvme_io_md": false, 00:20:03.114 "write_zeroes": true, 00:20:03.114 "zcopy": false, 00:20:03.114 "get_zone_info": false, 00:20:03.114 "zone_management": false, 00:20:03.114 "zone_append": false, 00:20:03.114 "compare": false, 00:20:03.114 "compare_and_write": false, 00:20:03.114 "abort": false, 00:20:03.114 "seek_hole": false, 00:20:03.114 "seek_data": false, 00:20:03.114 "copy": false, 00:20:03.114 "nvme_iov_md": false 00:20:03.114 }, 00:20:03.114 "memory_domains": [ 00:20:03.114 { 00:20:03.114 "dma_device_id": "system", 00:20:03.114 "dma_device_type": 1 00:20:03.114 }, 00:20:03.114 { 00:20:03.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.114 "dma_device_type": 2 00:20:03.114 }, 00:20:03.114 { 00:20:03.114 "dma_device_id": "system", 00:20:03.114 "dma_device_type": 1 00:20:03.114 }, 00:20:03.114 { 00:20:03.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.114 "dma_device_type": 2 00:20:03.114 } 00:20:03.114 ], 00:20:03.114 "driver_specific": { 00:20:03.114 "raid": { 00:20:03.114 "uuid": "6da67e82-48bd-11ef-a06c-59ddad71024c", 00:20:03.114 "strip_size_kb": 0, 00:20:03.114 "state": "online", 00:20:03.114 "raid_level": "raid1", 00:20:03.114 "superblock": true, 00:20:03.114 "num_base_bdevs": 2, 00:20:03.114 "num_base_bdevs_discovered": 2, 00:20:03.114 "num_base_bdevs_operational": 2, 00:20:03.114 "base_bdevs_list": [ 00:20:03.114 { 00:20:03.114 "name": "pt1", 00:20:03.114 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:03.114 "is_configured": true, 00:20:03.114 "data_offset": 256, 00:20:03.114 "data_size": 7936 00:20:03.114 }, 00:20:03.114 { 00:20:03.114 "name": "pt2", 00:20:03.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.114 "is_configured": true, 00:20:03.114 "data_offset": 256, 00:20:03.114 "data_size": 7936 00:20:03.114 } 00:20:03.114 ] 00:20:03.114 } 00:20:03.114 } 00:20:03.114 }' 00:20:03.114 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:03.114 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:03.114 pt2' 00:20:03.114 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:03.114 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:03.114 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:03.372 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:03.372 "name": "pt1", 
00:20:03.372 "aliases": [ 00:20:03.372 "00000000-0000-0000-0000-000000000001" 00:20:03.372 ], 00:20:03.373 "product_name": "passthru", 00:20:03.373 "block_size": 4096, 00:20:03.373 "num_blocks": 8192, 00:20:03.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:03.373 "md_size": 32, 00:20:03.373 "md_interleave": false, 00:20:03.373 "dif_type": 0, 00:20:03.373 "assigned_rate_limits": { 00:20:03.373 "rw_ios_per_sec": 0, 00:20:03.373 "rw_mbytes_per_sec": 0, 00:20:03.373 "r_mbytes_per_sec": 0, 00:20:03.373 "w_mbytes_per_sec": 0 00:20:03.373 }, 00:20:03.373 "claimed": true, 00:20:03.373 "claim_type": "exclusive_write", 00:20:03.373 "zoned": false, 00:20:03.373 "supported_io_types": { 00:20:03.373 "read": true, 00:20:03.373 "write": true, 00:20:03.373 "unmap": true, 00:20:03.373 "flush": true, 00:20:03.373 "reset": true, 00:20:03.373 "nvme_admin": false, 00:20:03.373 "nvme_io": false, 00:20:03.373 "nvme_io_md": false, 00:20:03.373 "write_zeroes": true, 00:20:03.373 "zcopy": true, 00:20:03.373 "get_zone_info": false, 00:20:03.373 "zone_management": false, 00:20:03.373 "zone_append": false, 00:20:03.373 "compare": false, 00:20:03.373 "compare_and_write": false, 00:20:03.373 "abort": true, 00:20:03.373 "seek_hole": false, 00:20:03.373 "seek_data": false, 00:20:03.373 "copy": true, 00:20:03.373 "nvme_iov_md": false 00:20:03.373 }, 00:20:03.373 "memory_domains": [ 00:20:03.373 { 00:20:03.373 "dma_device_id": "system", 00:20:03.373 "dma_device_type": 1 00:20:03.373 }, 00:20:03.373 { 00:20:03.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.373 "dma_device_type": 2 00:20:03.373 } 00:20:03.373 ], 00:20:03.373 "driver_specific": { 00:20:03.373 "passthru": { 00:20:03.373 "name": "pt1", 00:20:03.373 "base_bdev_name": "malloc1" 00:20:03.373 } 00:20:03.373 } 00:20:03.373 }' 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:03.373 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:03.632 
06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:03.632 "name": "pt2", 00:20:03.632 "aliases": [ 00:20:03.632 "00000000-0000-0000-0000-000000000002" 00:20:03.632 ], 00:20:03.632 "product_name": "passthru", 00:20:03.632 "block_size": 4096, 00:20:03.632 "num_blocks": 8192, 00:20:03.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.632 "md_size": 32, 00:20:03.632 "md_interleave": false, 00:20:03.632 "dif_type": 0, 00:20:03.632 "assigned_rate_limits": { 00:20:03.632 "rw_ios_per_sec": 0, 00:20:03.632 "rw_mbytes_per_sec": 0, 00:20:03.632 "r_mbytes_per_sec": 0, 00:20:03.632 "w_mbytes_per_sec": 0 00:20:03.632 }, 00:20:03.632 "claimed": true, 00:20:03.632 "claim_type": "exclusive_write", 00:20:03.632 "zoned": false, 00:20:03.632 "supported_io_types": { 00:20:03.632 "read": true, 00:20:03.632 "write": true, 00:20:03.632 "unmap": true, 00:20:03.632 "flush": true, 00:20:03.632 "reset": true, 00:20:03.632 "nvme_admin": false, 00:20:03.632 "nvme_io": false, 00:20:03.632 "nvme_io_md": false, 00:20:03.632 "write_zeroes": true, 00:20:03.632 "zcopy": true, 00:20:03.632 "get_zone_info": false, 00:20:03.632 "zone_management": false, 00:20:03.632 "zone_append": false, 00:20:03.632 "compare": false, 00:20:03.632 "compare_and_write": false, 00:20:03.632 "abort": true, 00:20:03.632 "seek_hole": false, 00:20:03.632 "seek_data": false, 00:20:03.632 "copy": true, 00:20:03.632 "nvme_iov_md": false 00:20:03.632 }, 00:20:03.632 "memory_domains": [ 00:20:03.632 { 00:20:03.632 "dma_device_id": "system", 00:20:03.632 "dma_device_type": 1 00:20:03.632 }, 00:20:03.632 { 00:20:03.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.632 "dma_device_type": 2 00:20:03.632 } 00:20:03.632 ], 00:20:03.632 "driver_specific": { 00:20:03.632 "passthru": { 00:20:03.632 "name": "pt2", 00:20:03.632 "base_bdev_name": "malloc2" 00:20:03.632 } 00:20:03.632 } 00:20:03.632 }' 00:20:03.632 06:33:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.632 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.632 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:20:03.632 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.632 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.632 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:20:03.632 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.632 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.632 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:20:03.632 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.632 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.632 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:20:03.632 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:03.632 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | 
.uuid' 00:20:03.891 [2024-07-23 06:33:16.322455] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.891 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 6da67e82-48bd-11ef-a06c-59ddad71024c '!=' 6da67e82-48bd-11ef-a06c-59ddad71024c ']' 00:20:03.891 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:20:03.891 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:03.891 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:20:03.891 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:04.148 [2024-07-23 06:33:16.618443] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:04.149 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:04.149 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:04.149 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:04.149 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:04.149 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:04.149 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:04.149 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.149 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.149 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.149 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.149 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.149 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.406 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.406 "name": "raid_bdev1", 00:20:04.406 "uuid": "6da67e82-48bd-11ef-a06c-59ddad71024c", 00:20:04.406 "strip_size_kb": 0, 00:20:04.406 "state": "online", 00:20:04.406 "raid_level": "raid1", 00:20:04.406 "superblock": true, 00:20:04.406 "num_base_bdevs": 2, 00:20:04.406 "num_base_bdevs_discovered": 1, 00:20:04.406 "num_base_bdevs_operational": 1, 00:20:04.406 "base_bdevs_list": [ 00:20:04.406 { 00:20:04.406 "name": null, 00:20:04.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.406 "is_configured": false, 00:20:04.406 "data_offset": 256, 00:20:04.406 "data_size": 7936 00:20:04.406 }, 00:20:04.406 { 00:20:04.406 "name": "pt2", 00:20:04.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.406 "is_configured": true, 00:20:04.406 "data_offset": 256, 00:20:04.406 "data_size": 7936 00:20:04.406 } 00:20:04.406 ] 00:20:04.406 }' 00:20:04.406 06:33:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:20:04.406 06:33:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.974 06:33:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:05.233 [2024-07-23 06:33:17.574437] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:05.233 [2024-07-23 06:33:17.574459] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:05.233 [2024-07-23 06:33:17.574486] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.233 [2024-07-23 06:33:17.574499] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:05.233 [2024-07-23 06:33:17.574503] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b5731835180 name raid_bdev1, state offline 00:20:05.233 06:33:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:20:05.233 06:33:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.491 06:33:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:20:05.491 06:33:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:20:05.491 06:33:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:20:05.491 06:33:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:05.491 06:33:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:05.750 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:20:05.750 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:05.750 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:20:05.750 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:20:05.750 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:20:05.750 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:06.008 [2024-07-23 06:33:18.386462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:06.008 [2024-07-23 06:33:18.386530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.008 [2024-07-23 06:33:18.386573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b5731834f00 00:20:06.008 [2024-07-23 06:33:18.386595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.008 [2024-07-23 06:33:18.387257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.008 [2024-07-23 06:33:18.387286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:06.008 [2024-07-23 06:33:18.387310] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:20:06.008 [2024-07-23 06:33:18.387322] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:06.008 [2024-07-23 06:33:18.387336] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b5731835180 00:20:06.008 [2024-07-23 06:33:18.387340] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:06.008 [2024-07-23 06:33:18.387360] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b5731897e20 00:20:06.008 [2024-07-23 06:33:18.387383] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b5731835180 00:20:06.008 [2024-07-23 06:33:18.387387] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b5731835180 00:20:06.008 [2024-07-23 06:33:18.387401] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.008 pt2 00:20:06.008 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:06.008 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:06.008 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:06.008 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:06.008 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:06.008 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:06.008 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:06.008 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:06.008 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:06.008 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:06.008 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.008 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.266 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:06.266 "name": "raid_bdev1", 00:20:06.266 "uuid": "6da67e82-48bd-11ef-a06c-59ddad71024c", 00:20:06.266 "strip_size_kb": 0, 00:20:06.266 "state": "online", 00:20:06.266 "raid_level": "raid1", 00:20:06.266 "superblock": true, 00:20:06.266 "num_base_bdevs": 2, 00:20:06.266 "num_base_bdevs_discovered": 1, 00:20:06.266 "num_base_bdevs_operational": 1, 00:20:06.266 "base_bdevs_list": [ 00:20:06.266 { 00:20:06.266 "name": null, 00:20:06.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.266 "is_configured": false, 00:20:06.266 "data_offset": 256, 00:20:06.266 "data_size": 7936 00:20:06.266 }, 00:20:06.266 { 00:20:06.266 "name": "pt2", 00:20:06.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:06.266 "is_configured": true, 00:20:06.266 "data_offset": 256, 00:20:06.266 "data_size": 7936 00:20:06.266 } 00:20:06.266 ] 00:20:06.266 }' 00:20:06.266 06:33:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:20:06.266 06:33:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:06.833 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:06.833 [2024-07-23 06:33:19.278475] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:06.833 [2024-07-23 06:33:19.278499] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:06.833 [2024-07-23 06:33:19.278521] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.833 [2024-07-23 06:33:19.278533] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.833 [2024-07-23 06:33:19.278537] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b5731835180 name raid_bdev1, state offline 00:20:06.833 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.833 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:20:07.092 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:20:07.092 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:20:07.092 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:20:07.092 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:07.350 [2024-07-23 06:33:19.842498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:07.350 [2024-07-23 06:33:19.842550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.350 [2024-07-23 06:33:19.842578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b5731834c80 00:20:07.350 [2024-07-23 06:33:19.842585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.350 [2024-07-23 06:33:19.843256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.350 [2024-07-23 06:33:19.843280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:07.350 [2024-07-23 06:33:19.843304] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:07.350 [2024-07-23 06:33:19.843315] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:07.350 [2024-07-23 06:33:19.843333] bdev_raid.c:3641:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:07.350 [2024-07-23 06:33:19.843337] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:07.350 [2024-07-23 06:33:19.843344] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b5731834780 name raid_bdev1, state configuring 00:20:07.350 [2024-07-23 06:33:19.843351] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:07.350 [2024-07-23 06:33:19.843370] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b5731834780 00:20:07.350 [2024-07-23 
06:33:19.843374] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:07.350 [2024-07-23 06:33:19.843397] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b5731897e20 00:20:07.350 [2024-07-23 06:33:19.843420] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b5731834780 00:20:07.350 [2024-07-23 06:33:19.843424] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b5731834780 00:20:07.350 [2024-07-23 06:33:19.843437] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.350 pt1 00:20:07.350 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:20:07.350 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:07.350 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:07.350 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:07.350 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:07.350 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:07.350 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:07.350 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:07.350 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:07.350 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:07.350 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:07.350 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.350 06:33:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.608 06:33:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:07.608 "name": "raid_bdev1", 00:20:07.609 "uuid": "6da67e82-48bd-11ef-a06c-59ddad71024c", 00:20:07.609 "strip_size_kb": 0, 00:20:07.609 "state": "online", 00:20:07.609 "raid_level": "raid1", 00:20:07.609 "superblock": true, 00:20:07.609 "num_base_bdevs": 2, 00:20:07.609 "num_base_bdevs_discovered": 1, 00:20:07.609 "num_base_bdevs_operational": 1, 00:20:07.609 "base_bdevs_list": [ 00:20:07.609 { 00:20:07.609 "name": null, 00:20:07.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.609 "is_configured": false, 00:20:07.609 "data_offset": 256, 00:20:07.609 "data_size": 7936 00:20:07.609 }, 00:20:07.609 { 00:20:07.609 "name": "pt2", 00:20:07.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:07.609 "is_configured": true, 00:20:07.609 "data_offset": 256, 00:20:07.609 "data_size": 7936 00:20:07.609 } 00:20:07.609 ] 00:20:07.609 }' 00:20:07.609 06:33:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:07.609 06:33:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.176 06:33:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:20:08.176 06:33:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:08.176 06:33:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:20:08.434 06:33:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:20:08.434 06:33:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:08.434 [2024-07-23 06:33:20.954595] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:08.692 06:33:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 6da67e82-48bd-11ef-a06c-59ddad71024c '!=' 6da67e82-48bd-11ef-a06c-59ddad71024c ']' 00:20:08.692 06:33:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 66561 00:20:08.692 06:33:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 66561 ']' 00:20:08.692 06:33:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 66561 00:20:08.692 06:33:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:20:08.692 06:33:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:20:08.692 06:33:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps -c -o command 66561 00:20:08.692 06:33:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # tail -1 00:20:08.692 06:33:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:20:08.693 06:33:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:20:08.693 06:33:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66561' 00:20:08.693 killing process with pid 66561 00:20:08.693 06:33:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 66561 00:20:08.693 [2024-07-23 06:33:20.984959] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:08.693 [2024-07-23 06:33:20.984981] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:08.693 [2024-07-23 06:33:20.984993] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:08.693 [2024-07-23 06:33:20.984998] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b5731834780 name raid_bdev1, state offline 00:20:08.693 06:33:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 66561 00:20:08.693 [2024-07-23 06:33:20.997676] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:08.693 06:33:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:20:08.693 00:20:08.693 real 0m13.723s 00:20:08.693 user 0m24.465s 00:20:08.693 sys 0m2.208s 00:20:08.693 06:33:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:08.693 06:33:21 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:08.693 ************************************ 00:20:08.693 END TEST raid_superblock_test_md_separate 00:20:08.693 ************************************ 00:20:08.951 06:33:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:08.951 06:33:21 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' '' = true ']' 00:20:08.951 06:33:21 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:20:08.951 06:33:21 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:20:08.951 06:33:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:08.951 06:33:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:08.951 06:33:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:08.951 ************************************ 00:20:08.951 START TEST raid_state_function_test_sb_md_interleaved 00:20:08.951 ************************************ 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' 
raid1 '!=' raid1 ']' 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=66952 00:20:08.951 Process raid pid: 66952 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66952' 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 66952 /var/tmp/spdk-raid.sock 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 66952 ']' 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:08.951 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:08.952 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:08.952 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.952 06:33:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.952 [2024-07-23 06:33:21.249010] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:08.952 [2024-07-23 06:33:21.249176] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:20:09.518 EAL: TSC is not safe to use in SMP mode 00:20:09.518 EAL: TSC is not invariant 00:20:09.518 [2024-07-23 06:33:21.829946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.518 [2024-07-23 06:33:21.923307] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
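A minimal sketch of the launch-and-wait step recorded here, assuming the SPDK checkout and RPC socket paths used throughout this log; polling rpc_get_methods is an assumption standing in for the harness's waitforlisten helper, and the teardown mirrors killprocess:

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock
# start the bare bdev service with bdev_raid debug logging, as in the entry above
"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
svc_pid=$!
# block until the RPC server answers before issuing any bdev_raid RPCs
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
# ... run the test RPCs, then tear the service down
kill "$svc_pid" && wait "$svc_pid"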
00:20:09.518 [2024-07-23 06:33:21.925627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.518 [2024-07-23 06:33:21.926496] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:09.518 [2024-07-23 06:33:21.926510] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:09.776 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.776 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:20:09.776 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:20:10.343 [2024-07-23 06:33:22.600426] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:10.343 [2024-07-23 06:33:22.600477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:10.343 [2024-07-23 06:33:22.600493] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:10.343 [2024-07-23 06:33:22.600501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:10.343 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:10.343 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:10.343 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:10.343 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:10.343 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:10.343 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:10.343 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.343 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.343 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:10.343 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.343 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.343 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.601 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.601 "name": "Existed_Raid", 00:20:10.601 "uuid": "755082eb-48bd-11ef-a06c-59ddad71024c", 00:20:10.601 "strip_size_kb": 0, 00:20:10.601 "state": "configuring", 00:20:10.601 "raid_level": "raid1", 00:20:10.601 "superblock": true, 00:20:10.601 "num_base_bdevs": 2, 00:20:10.601 "num_base_bdevs_discovered": 0, 00:20:10.601 "num_base_bdevs_operational": 2, 00:20:10.601 
"base_bdevs_list": [ 00:20:10.601 { 00:20:10.601 "name": "BaseBdev1", 00:20:10.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.601 "is_configured": false, 00:20:10.601 "data_offset": 0, 00:20:10.601 "data_size": 0 00:20:10.601 }, 00:20:10.601 { 00:20:10.601 "name": "BaseBdev2", 00:20:10.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.601 "is_configured": false, 00:20:10.601 "data_offset": 0, 00:20:10.601 "data_size": 0 00:20:10.601 } 00:20:10.601 ] 00:20:10.601 }' 00:20:10.601 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.601 06:33:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.860 06:33:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:11.118 [2024-07-23 06:33:23.488501] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:11.118 [2024-07-23 06:33:23.488523] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3aca11e34500 name Existed_Raid, state configuring 00:20:11.118 06:33:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:20:11.376 [2024-07-23 06:33:23.744511] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:11.376 [2024-07-23 06:33:23.744567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:11.376 [2024-07-23 06:33:23.744586] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:11.376 [2024-07-23 06:33:23.744594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:11.376 06:33:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:11.634 [2024-07-23 06:33:24.001346] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:11.634 BaseBdev1 00:20:11.634 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:11.634 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:11.634 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:11.634 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:20:11.634 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:11.634 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:11.634 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:11.893 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 
2000 00:20:12.152 [ 00:20:12.152 { 00:20:12.152 "name": "BaseBdev1", 00:20:12.152 "aliases": [ 00:20:12.152 "7626263e-48bd-11ef-a06c-59ddad71024c" 00:20:12.152 ], 00:20:12.152 "product_name": "Malloc disk", 00:20:12.152 "block_size": 4128, 00:20:12.152 "num_blocks": 8192, 00:20:12.152 "uuid": "7626263e-48bd-11ef-a06c-59ddad71024c", 00:20:12.152 "md_size": 32, 00:20:12.152 "md_interleave": true, 00:20:12.152 "dif_type": 0, 00:20:12.152 "assigned_rate_limits": { 00:20:12.152 "rw_ios_per_sec": 0, 00:20:12.152 "rw_mbytes_per_sec": 0, 00:20:12.152 "r_mbytes_per_sec": 0, 00:20:12.152 "w_mbytes_per_sec": 0 00:20:12.152 }, 00:20:12.152 "claimed": true, 00:20:12.152 "claim_type": "exclusive_write", 00:20:12.152 "zoned": false, 00:20:12.152 "supported_io_types": { 00:20:12.152 "read": true, 00:20:12.152 "write": true, 00:20:12.152 "unmap": true, 00:20:12.152 "flush": true, 00:20:12.152 "reset": true, 00:20:12.152 "nvme_admin": false, 00:20:12.152 "nvme_io": false, 00:20:12.152 "nvme_io_md": false, 00:20:12.152 "write_zeroes": true, 00:20:12.152 "zcopy": true, 00:20:12.152 "get_zone_info": false, 00:20:12.152 "zone_management": false, 00:20:12.152 "zone_append": false, 00:20:12.152 "compare": false, 00:20:12.152 "compare_and_write": false, 00:20:12.152 "abort": true, 00:20:12.152 "seek_hole": false, 00:20:12.152 "seek_data": false, 00:20:12.152 "copy": true, 00:20:12.152 "nvme_iov_md": false 00:20:12.152 }, 00:20:12.152 "memory_domains": [ 00:20:12.152 { 00:20:12.152 "dma_device_id": "system", 00:20:12.152 "dma_device_type": 1 00:20:12.152 }, 00:20:12.152 { 00:20:12.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.152 "dma_device_type": 2 00:20:12.152 } 00:20:12.152 ], 00:20:12.152 "driver_specific": {} 00:20:12.152 } 00:20:12.152 ] 00:20:12.152 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:20:12.152 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:12.152 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:12.152 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:12.152 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:12.152 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:12.152 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:12.152 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:12.152 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:12.152 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:12.152 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:12.152 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.152 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:12.411 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:12.411 "name": "Existed_Raid", 00:20:12.411 "uuid": "75ff15d5-48bd-11ef-a06c-59ddad71024c", 00:20:12.411 "strip_size_kb": 0, 00:20:12.411 "state": "configuring", 00:20:12.411 "raid_level": "raid1", 00:20:12.411 "superblock": true, 00:20:12.411 "num_base_bdevs": 2, 00:20:12.411 "num_base_bdevs_discovered": 1, 00:20:12.411 "num_base_bdevs_operational": 2, 00:20:12.411 "base_bdevs_list": [ 00:20:12.411 { 00:20:12.411 "name": "BaseBdev1", 00:20:12.411 "uuid": "7626263e-48bd-11ef-a06c-59ddad71024c", 00:20:12.411 "is_configured": true, 00:20:12.411 "data_offset": 256, 00:20:12.411 "data_size": 7936 00:20:12.411 }, 00:20:12.411 { 00:20:12.411 "name": "BaseBdev2", 00:20:12.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.411 "is_configured": false, 00:20:12.411 "data_offset": 0, 00:20:12.411 "data_size": 0 00:20:12.411 } 00:20:12.411 ] 00:20:12.411 }' 00:20:12.411 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:12.411 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.669 06:33:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:12.927 [2024-07-23 06:33:25.240576] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:12.927 [2024-07-23 06:33:25.240616] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3aca11e34500 name Existed_Raid, state configuring 00:20:12.927 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:20:13.185 [2024-07-23 06:33:25.572607] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:13.185 [2024-07-23 06:33:25.573561] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:13.185 [2024-07-23 06:33:25.573615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:13.185 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:13.185 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:13.185 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:13.185 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:13.185 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:13.185 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:13.185 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:13.185 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:13.185 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 
-- # local raid_bdev_info 00:20:13.185 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:13.185 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:13.185 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:13.185 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.185 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.443 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.443 "name": "Existed_Raid", 00:20:13.443 "uuid": "771607a9-48bd-11ef-a06c-59ddad71024c", 00:20:13.443 "strip_size_kb": 0, 00:20:13.443 "state": "configuring", 00:20:13.443 "raid_level": "raid1", 00:20:13.443 "superblock": true, 00:20:13.443 "num_base_bdevs": 2, 00:20:13.443 "num_base_bdevs_discovered": 1, 00:20:13.443 "num_base_bdevs_operational": 2, 00:20:13.443 "base_bdevs_list": [ 00:20:13.443 { 00:20:13.443 "name": "BaseBdev1", 00:20:13.443 "uuid": "7626263e-48bd-11ef-a06c-59ddad71024c", 00:20:13.443 "is_configured": true, 00:20:13.443 "data_offset": 256, 00:20:13.443 "data_size": 7936 00:20:13.443 }, 00:20:13.443 { 00:20:13.443 "name": "BaseBdev2", 00:20:13.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.443 "is_configured": false, 00:20:13.443 "data_offset": 0, 00:20:13.443 "data_size": 0 00:20:13.443 } 00:20:13.443 ] 00:20:13.443 }' 00:20:13.443 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.443 06:33:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.702 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:20:13.960 [2024-07-23 06:33:26.380734] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:13.960 [2024-07-23 06:33:26.380801] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3aca11e34a00 00:20:13.960 [2024-07-23 06:33:26.380806] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:13.960 [2024-07-23 06:33:26.380824] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3aca11e97e20 00:20:13.960 [2024-07-23 06:33:26.380837] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3aca11e34a00 00:20:13.960 [2024-07-23 06:33:26.380840] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3aca11e34a00 00:20:13.960 [2024-07-23 06:33:26.380850] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.960 BaseBdev2 00:20:13.960 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:13.960 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:13.960 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:13.960 
06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:20:13.960 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:13.960 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:13.960 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:14.219 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:14.477 [ 00:20:14.477 { 00:20:14.477 "name": "BaseBdev2", 00:20:14.477 "aliases": [ 00:20:14.477 "7791550f-48bd-11ef-a06c-59ddad71024c" 00:20:14.477 ], 00:20:14.477 "product_name": "Malloc disk", 00:20:14.477 "block_size": 4128, 00:20:14.477 "num_blocks": 8192, 00:20:14.477 "uuid": "7791550f-48bd-11ef-a06c-59ddad71024c", 00:20:14.477 "md_size": 32, 00:20:14.477 "md_interleave": true, 00:20:14.477 "dif_type": 0, 00:20:14.477 "assigned_rate_limits": { 00:20:14.477 "rw_ios_per_sec": 0, 00:20:14.477 "rw_mbytes_per_sec": 0, 00:20:14.477 "r_mbytes_per_sec": 0, 00:20:14.477 "w_mbytes_per_sec": 0 00:20:14.477 }, 00:20:14.477 "claimed": true, 00:20:14.477 "claim_type": "exclusive_write", 00:20:14.477 "zoned": false, 00:20:14.477 "supported_io_types": { 00:20:14.477 "read": true, 00:20:14.477 "write": true, 00:20:14.477 "unmap": true, 00:20:14.477 "flush": true, 00:20:14.477 "reset": true, 00:20:14.477 "nvme_admin": false, 00:20:14.477 "nvme_io": false, 00:20:14.477 "nvme_io_md": false, 00:20:14.477 "write_zeroes": true, 00:20:14.477 "zcopy": true, 00:20:14.477 "get_zone_info": false, 00:20:14.477 "zone_management": false, 00:20:14.477 "zone_append": false, 00:20:14.477 "compare": false, 00:20:14.477 "compare_and_write": false, 00:20:14.477 "abort": true, 00:20:14.477 "seek_hole": false, 00:20:14.477 "seek_data": false, 00:20:14.477 "copy": true, 00:20:14.477 "nvme_iov_md": false 00:20:14.477 }, 00:20:14.477 "memory_domains": [ 00:20:14.477 { 00:20:14.477 "dma_device_id": "system", 00:20:14.477 "dma_device_type": 1 00:20:14.477 }, 00:20:14.477 { 00:20:14.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.477 "dma_device_type": 2 00:20:14.477 } 00:20:14.477 ], 00:20:14.477 "driver_specific": {} 00:20:14.477 } 00:20:14.477 ] 00:20:14.477 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:20:14.477 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:14.477 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:14.477 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:14.477 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:14.477 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:14.477 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:14.477 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:14.477 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:14.477 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:14.478 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:14.478 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:14.478 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:14.478 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.478 06:33:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.736 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.736 "name": "Existed_Raid", 00:20:14.736 "uuid": "771607a9-48bd-11ef-a06c-59ddad71024c", 00:20:14.736 "strip_size_kb": 0, 00:20:14.736 "state": "online", 00:20:14.736 "raid_level": "raid1", 00:20:14.736 "superblock": true, 00:20:14.736 "num_base_bdevs": 2, 00:20:14.736 "num_base_bdevs_discovered": 2, 00:20:14.736 "num_base_bdevs_operational": 2, 00:20:14.736 "base_bdevs_list": [ 00:20:14.736 { 00:20:14.736 "name": "BaseBdev1", 00:20:14.736 "uuid": "7626263e-48bd-11ef-a06c-59ddad71024c", 00:20:14.736 "is_configured": true, 00:20:14.736 "data_offset": 256, 00:20:14.736 "data_size": 7936 00:20:14.736 }, 00:20:14.736 { 00:20:14.736 "name": "BaseBdev2", 00:20:14.736 "uuid": "7791550f-48bd-11ef-a06c-59ddad71024c", 00:20:14.736 "is_configured": true, 00:20:14.736 "data_offset": 256, 00:20:14.736 "data_size": 7936 00:20:14.736 } 00:20:14.736 ] 00:20:14.736 }' 00:20:14.736 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.736 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.994 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:14.994 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:14.994 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:14.994 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:14.994 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:14.994 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:20:14.994 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:14.994 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:15.561 [2024-07-23 06:33:27.788816] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:15.561 06:33:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:15.561 "name": "Existed_Raid", 00:20:15.561 "aliases": [ 00:20:15.561 "771607a9-48bd-11ef-a06c-59ddad71024c" 00:20:15.561 ], 00:20:15.561 "product_name": "Raid Volume", 00:20:15.561 "block_size": 4128, 00:20:15.561 "num_blocks": 7936, 00:20:15.561 "uuid": "771607a9-48bd-11ef-a06c-59ddad71024c", 00:20:15.561 "md_size": 32, 00:20:15.561 "md_interleave": true, 00:20:15.561 "dif_type": 0, 00:20:15.561 "assigned_rate_limits": { 00:20:15.561 "rw_ios_per_sec": 0, 00:20:15.561 "rw_mbytes_per_sec": 0, 00:20:15.561 "r_mbytes_per_sec": 0, 00:20:15.561 "w_mbytes_per_sec": 0 00:20:15.561 }, 00:20:15.561 "claimed": false, 00:20:15.561 "zoned": false, 00:20:15.561 "supported_io_types": { 00:20:15.561 "read": true, 00:20:15.561 "write": true, 00:20:15.561 "unmap": false, 00:20:15.561 "flush": false, 00:20:15.561 "reset": true, 00:20:15.561 "nvme_admin": false, 00:20:15.561 "nvme_io": false, 00:20:15.561 "nvme_io_md": false, 00:20:15.561 "write_zeroes": true, 00:20:15.561 "zcopy": false, 00:20:15.561 "get_zone_info": false, 00:20:15.561 "zone_management": false, 00:20:15.561 "zone_append": false, 00:20:15.561 "compare": false, 00:20:15.561 "compare_and_write": false, 00:20:15.561 "abort": false, 00:20:15.561 "seek_hole": false, 00:20:15.561 "seek_data": false, 00:20:15.561 "copy": false, 00:20:15.561 "nvme_iov_md": false 00:20:15.561 }, 00:20:15.561 "memory_domains": [ 00:20:15.561 { 00:20:15.561 "dma_device_id": "system", 00:20:15.562 "dma_device_type": 1 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.562 "dma_device_type": 2 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "dma_device_id": "system", 00:20:15.562 "dma_device_type": 1 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.562 "dma_device_type": 2 00:20:15.562 } 00:20:15.562 ], 00:20:15.562 "driver_specific": { 00:20:15.562 "raid": { 00:20:15.562 "uuid": "771607a9-48bd-11ef-a06c-59ddad71024c", 00:20:15.562 "strip_size_kb": 0, 00:20:15.562 "state": "online", 00:20:15.562 "raid_level": "raid1", 00:20:15.562 "superblock": true, 00:20:15.562 "num_base_bdevs": 2, 00:20:15.562 "num_base_bdevs_discovered": 2, 00:20:15.562 "num_base_bdevs_operational": 2, 00:20:15.562 "base_bdevs_list": [ 00:20:15.562 { 00:20:15.562 "name": "BaseBdev1", 00:20:15.562 "uuid": "7626263e-48bd-11ef-a06c-59ddad71024c", 00:20:15.562 "is_configured": true, 00:20:15.562 "data_offset": 256, 00:20:15.562 "data_size": 7936 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "name": "BaseBdev2", 00:20:15.562 "uuid": "7791550f-48bd-11ef-a06c-59ddad71024c", 00:20:15.562 "is_configured": true, 00:20:15.562 "data_offset": 256, 00:20:15.562 "data_size": 7936 00:20:15.562 } 00:20:15.562 ] 00:20:15.562 } 00:20:15.562 } 00:20:15.562 }' 00:20:15.562 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:15.562 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:15.562 BaseBdev2' 00:20:15.562 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:15.562 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 
00:20:15.562 06:33:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:15.562 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:15.562 "name": "BaseBdev1", 00:20:15.562 "aliases": [ 00:20:15.562 "7626263e-48bd-11ef-a06c-59ddad71024c" 00:20:15.562 ], 00:20:15.562 "product_name": "Malloc disk", 00:20:15.562 "block_size": 4128, 00:20:15.562 "num_blocks": 8192, 00:20:15.562 "uuid": "7626263e-48bd-11ef-a06c-59ddad71024c", 00:20:15.562 "md_size": 32, 00:20:15.562 "md_interleave": true, 00:20:15.562 "dif_type": 0, 00:20:15.562 "assigned_rate_limits": { 00:20:15.562 "rw_ios_per_sec": 0, 00:20:15.562 "rw_mbytes_per_sec": 0, 00:20:15.562 "r_mbytes_per_sec": 0, 00:20:15.562 "w_mbytes_per_sec": 0 00:20:15.562 }, 00:20:15.562 "claimed": true, 00:20:15.562 "claim_type": "exclusive_write", 00:20:15.562 "zoned": false, 00:20:15.562 "supported_io_types": { 00:20:15.562 "read": true, 00:20:15.562 "write": true, 00:20:15.562 "unmap": true, 00:20:15.562 "flush": true, 00:20:15.562 "reset": true, 00:20:15.562 "nvme_admin": false, 00:20:15.562 "nvme_io": false, 00:20:15.562 "nvme_io_md": false, 00:20:15.562 "write_zeroes": true, 00:20:15.562 "zcopy": true, 00:20:15.562 "get_zone_info": false, 00:20:15.562 "zone_management": false, 00:20:15.562 "zone_append": false, 00:20:15.562 "compare": false, 00:20:15.562 "compare_and_write": false, 00:20:15.562 "abort": true, 00:20:15.562 "seek_hole": false, 00:20:15.562 "seek_data": false, 00:20:15.562 "copy": true, 00:20:15.562 "nvme_iov_md": false 00:20:15.562 }, 00:20:15.562 "memory_domains": [ 00:20:15.562 { 00:20:15.562 "dma_device_id": "system", 00:20:15.562 "dma_device_type": 1 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.562 "dma_device_type": 2 00:20:15.562 } 00:20:15.562 ], 00:20:15.562 "driver_specific": {} 00:20:15.562 }' 00:20:15.562 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:15.562 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:15.562 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:20:15.562 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:15.562 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:15.821 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:20:15.821 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:15.821 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:15.821 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:20:15.821 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:15.821 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:15.821 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:20:15.821 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:15.821 06:33:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:15.821 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:16.079 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:16.079 "name": "BaseBdev2", 00:20:16.079 "aliases": [ 00:20:16.079 "7791550f-48bd-11ef-a06c-59ddad71024c" 00:20:16.079 ], 00:20:16.079 "product_name": "Malloc disk", 00:20:16.079 "block_size": 4128, 00:20:16.079 "num_blocks": 8192, 00:20:16.079 "uuid": "7791550f-48bd-11ef-a06c-59ddad71024c", 00:20:16.079 "md_size": 32, 00:20:16.079 "md_interleave": true, 00:20:16.079 "dif_type": 0, 00:20:16.079 "assigned_rate_limits": { 00:20:16.079 "rw_ios_per_sec": 0, 00:20:16.079 "rw_mbytes_per_sec": 0, 00:20:16.079 "r_mbytes_per_sec": 0, 00:20:16.079 "w_mbytes_per_sec": 0 00:20:16.079 }, 00:20:16.079 "claimed": true, 00:20:16.079 "claim_type": "exclusive_write", 00:20:16.079 "zoned": false, 00:20:16.079 "supported_io_types": { 00:20:16.079 "read": true, 00:20:16.079 "write": true, 00:20:16.079 "unmap": true, 00:20:16.079 "flush": true, 00:20:16.079 "reset": true, 00:20:16.079 "nvme_admin": false, 00:20:16.079 "nvme_io": false, 00:20:16.079 "nvme_io_md": false, 00:20:16.079 "write_zeroes": true, 00:20:16.079 "zcopy": true, 00:20:16.079 "get_zone_info": false, 00:20:16.079 "zone_management": false, 00:20:16.079 "zone_append": false, 00:20:16.079 "compare": false, 00:20:16.079 "compare_and_write": false, 00:20:16.079 "abort": true, 00:20:16.079 "seek_hole": false, 00:20:16.079 "seek_data": false, 00:20:16.079 "copy": true, 00:20:16.079 "nvme_iov_md": false 00:20:16.079 }, 00:20:16.079 "memory_domains": [ 00:20:16.079 { 00:20:16.079 "dma_device_id": "system", 00:20:16.079 "dma_device_type": 1 00:20:16.079 }, 00:20:16.079 { 00:20:16.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.079 "dma_device_type": 2 00:20:16.079 } 00:20:16.079 ], 00:20:16.079 "driver_specific": {} 00:20:16.079 }' 00:20:16.079 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:16.079 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:16.079 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:20:16.079 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:16.079 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:16.079 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:20:16.079 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:16.079 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:16.079 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:20:16.079 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:16.079 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:16.079 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- 
# [[ 0 == 0 ]] 00:20:16.079 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:16.337 [2024-07-23 06:33:28.740974] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:16.337 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:16.337 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:16.337 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:16.337 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:20:16.337 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:16.337 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:16.337 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.337 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:16.337 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:16.337 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:16.337 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:16.337 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.337 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.338 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.338 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.338 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.338 06:33:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.596 06:33:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.596 "name": "Existed_Raid", 00:20:16.596 "uuid": "771607a9-48bd-11ef-a06c-59ddad71024c", 00:20:16.596 "strip_size_kb": 0, 00:20:16.596 "state": "online", 00:20:16.596 "raid_level": "raid1", 00:20:16.596 "superblock": true, 00:20:16.596 "num_base_bdevs": 2, 00:20:16.596 "num_base_bdevs_discovered": 1, 00:20:16.596 "num_base_bdevs_operational": 1, 00:20:16.596 "base_bdevs_list": [ 00:20:16.596 { 00:20:16.596 "name": null, 00:20:16.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.596 "is_configured": false, 00:20:16.596 "data_offset": 256, 00:20:16.596 "data_size": 7936 00:20:16.596 }, 00:20:16.596 { 00:20:16.596 "name": "BaseBdev2", 00:20:16.596 "uuid": "7791550f-48bd-11ef-a06c-59ddad71024c", 00:20:16.596 "is_configured": true, 00:20:16.596 "data_offset": 256, 00:20:16.596 "data_size": 
7936 00:20:16.596 } 00:20:16.596 ] 00:20:16.596 }' 00:20:16.596 06:33:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.596 06:33:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.854 06:33:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:16.854 06:33:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:16.854 06:33:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.854 06:33:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:17.113 06:33:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:17.113 06:33:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:17.113 06:33:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:17.371 [2024-07-23 06:33:29.855966] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:17.372 [2024-07-23 06:33:29.856038] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:17.372 [2024-07-23 06:33:29.862511] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.372 [2024-07-23 06:33:29.862529] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:17.372 [2024-07-23 06:33:29.862534] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3aca11e34a00 name Existed_Raid, state offline 00:20:17.372 06:33:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:17.372 06:33:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:17.372 06:33:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.372 06:33:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:17.630 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:17.630 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:17.630 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:20:17.630 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 66952 00:20:17.630 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 66952 ']' 00:20:17.630 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 66952 00:20:17.630 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:20:17.630 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 
-- # '[' FreeBSD = Linux ']' 00:20:17.630 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 66952 00:20:17.631 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:20:17.631 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:20:17.631 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:20:17.631 killing process with pid 66952 00:20:17.631 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66952' 00:20:17.631 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 66952 00:20:17.631 [2024-07-23 06:33:30.125403] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:17.631 [2024-07-23 06:33:30.125436] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:17.631 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 66952 00:20:17.890 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:20:17.890 00:20:17.890 real 0m9.076s 00:20:17.890 user 0m15.719s 00:20:17.890 sys 0m1.668s 00:20:17.890 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:17.890 06:33:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.890 ************************************ 00:20:17.890 END TEST raid_state_function_test_sb_md_interleaved 00:20:17.890 ************************************ 00:20:17.890 06:33:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:17.890 06:33:30 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:20:17.890 06:33:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:17.890 06:33:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:17.890 06:33:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:17.890 ************************************ 00:20:17.890 START TEST raid_superblock_test_md_interleaved 00:20:17.890 ************************************ 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 
00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=67222 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 67222 /var/tmp/spdk-raid.sock 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 67222 ']' 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:17.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.890 06:33:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.890 [2024-07-23 06:33:30.367910] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:17.890 [2024-07-23 06:33:30.368176] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:20:18.458 EAL: TSC is not safe to use in SMP mode 00:20:18.458 EAL: TSC is not invariant 00:20:18.458 [2024-07-23 06:33:30.932513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.716 [2024-07-23 06:33:31.016444] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:20:18.716 [2024-07-23 06:33:31.018549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.716 [2024-07-23 06:33:31.019289] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:18.716 [2024-07-23 06:33:31.019303] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:18.975 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.975 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:20:18.975 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:20:18.975 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:18.975 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:20:18.975 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:20:18.975 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:18.975 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:18.975 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:18.975 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:18.975 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:20:19.234 malloc1 00:20:19.234 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:19.492 [2024-07-23 06:33:31.958889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:19.492 [2024-07-23 06:33:31.958960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.492 [2024-07-23 06:33:31.958984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x98687434780 00:20:19.492 [2024-07-23 06:33:31.958999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.492 [2024-07-23 06:33:31.959884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.492 [2024-07-23 06:33:31.959910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:19.492 pt1 00:20:19.492 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:19.492 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:19.492 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:20:19.492 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:20:19.492 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:19.492 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:20:19.492 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:19.492 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:19.492 06:33:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:20:19.751 malloc2 00:20:19.751 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:20.010 [2024-07-23 06:33:32.478938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:20.010 [2024-07-23 06:33:32.479017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:20.010 [2024-07-23 06:33:32.479029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x98687434c80 00:20:20.010 [2024-07-23 06:33:32.479038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:20.010 [2024-07-23 06:33:32.479682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:20.010 [2024-07-23 06:33:32.479709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:20.010 pt2 00:20:20.010 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:20.010 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:20.010 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:20:20.268 [2024-07-23 06:33:32.722996] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:20.268 [2024-07-23 06:33:32.723622] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:20.268 [2024-07-23 06:33:32.723687] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x98687434f00 00:20:20.268 [2024-07-23 06:33:32.723694] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:20.268 [2024-07-23 06:33:32.723736] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x98687497e20 00:20:20.268 [2024-07-23 06:33:32.723753] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x98687434f00 00:20:20.268 [2024-07-23 06:33:32.723757] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x98687434f00 00:20:20.268 [2024-07-23 06:33:32.723771] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:20.268 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:20.268 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:20.268 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:20.268 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:20.268 06:33:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:20.268 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:20.268 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:20.268 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:20.268 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:20.268 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:20.268 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.268 06:33:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.527 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:20.527 "name": "raid_bdev1", 00:20:20.527 "uuid": "7b591736-48bd-11ef-a06c-59ddad71024c", 00:20:20.527 "strip_size_kb": 0, 00:20:20.527 "state": "online", 00:20:20.527 "raid_level": "raid1", 00:20:20.527 "superblock": true, 00:20:20.527 "num_base_bdevs": 2, 00:20:20.527 "num_base_bdevs_discovered": 2, 00:20:20.527 "num_base_bdevs_operational": 2, 00:20:20.527 "base_bdevs_list": [ 00:20:20.527 { 00:20:20.527 "name": "pt1", 00:20:20.527 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:20.527 "is_configured": true, 00:20:20.527 "data_offset": 256, 00:20:20.527 "data_size": 7936 00:20:20.527 }, 00:20:20.527 { 00:20:20.527 "name": "pt2", 00:20:20.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:20.527 "is_configured": true, 00:20:20.527 "data_offset": 256, 00:20:20.527 "data_size": 7936 00:20:20.527 } 00:20:20.527 ] 00:20:20.527 }' 00:20:20.527 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:20.527 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.095 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:20:21.095 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:21.095 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:21.095 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:21.095 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:21.095 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:20:21.095 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:21.095 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:21.095 [2024-07-23 06:33:33.567112] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:21.095 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:21.095 "name": "raid_bdev1", 
00:20:21.095 "aliases": [ 00:20:21.095 "7b591736-48bd-11ef-a06c-59ddad71024c" 00:20:21.095 ], 00:20:21.095 "product_name": "Raid Volume", 00:20:21.095 "block_size": 4128, 00:20:21.095 "num_blocks": 7936, 00:20:21.095 "uuid": "7b591736-48bd-11ef-a06c-59ddad71024c", 00:20:21.095 "md_size": 32, 00:20:21.095 "md_interleave": true, 00:20:21.095 "dif_type": 0, 00:20:21.095 "assigned_rate_limits": { 00:20:21.095 "rw_ios_per_sec": 0, 00:20:21.095 "rw_mbytes_per_sec": 0, 00:20:21.095 "r_mbytes_per_sec": 0, 00:20:21.095 "w_mbytes_per_sec": 0 00:20:21.095 }, 00:20:21.095 "claimed": false, 00:20:21.095 "zoned": false, 00:20:21.095 "supported_io_types": { 00:20:21.095 "read": true, 00:20:21.095 "write": true, 00:20:21.095 "unmap": false, 00:20:21.095 "flush": false, 00:20:21.095 "reset": true, 00:20:21.095 "nvme_admin": false, 00:20:21.095 "nvme_io": false, 00:20:21.095 "nvme_io_md": false, 00:20:21.095 "write_zeroes": true, 00:20:21.095 "zcopy": false, 00:20:21.095 "get_zone_info": false, 00:20:21.095 "zone_management": false, 00:20:21.095 "zone_append": false, 00:20:21.095 "compare": false, 00:20:21.095 "compare_and_write": false, 00:20:21.095 "abort": false, 00:20:21.095 "seek_hole": false, 00:20:21.095 "seek_data": false, 00:20:21.095 "copy": false, 00:20:21.095 "nvme_iov_md": false 00:20:21.095 }, 00:20:21.095 "memory_domains": [ 00:20:21.095 { 00:20:21.095 "dma_device_id": "system", 00:20:21.095 "dma_device_type": 1 00:20:21.095 }, 00:20:21.095 { 00:20:21.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.095 "dma_device_type": 2 00:20:21.095 }, 00:20:21.095 { 00:20:21.095 "dma_device_id": "system", 00:20:21.095 "dma_device_type": 1 00:20:21.095 }, 00:20:21.095 { 00:20:21.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.095 "dma_device_type": 2 00:20:21.095 } 00:20:21.095 ], 00:20:21.095 "driver_specific": { 00:20:21.095 "raid": { 00:20:21.095 "uuid": "7b591736-48bd-11ef-a06c-59ddad71024c", 00:20:21.095 "strip_size_kb": 0, 00:20:21.095 "state": "online", 00:20:21.095 "raid_level": "raid1", 00:20:21.095 "superblock": true, 00:20:21.095 "num_base_bdevs": 2, 00:20:21.095 "num_base_bdevs_discovered": 2, 00:20:21.095 "num_base_bdevs_operational": 2, 00:20:21.095 "base_bdevs_list": [ 00:20:21.095 { 00:20:21.095 "name": "pt1", 00:20:21.095 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:21.095 "is_configured": true, 00:20:21.095 "data_offset": 256, 00:20:21.095 "data_size": 7936 00:20:21.095 }, 00:20:21.095 { 00:20:21.095 "name": "pt2", 00:20:21.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:21.095 "is_configured": true, 00:20:21.095 "data_offset": 256, 00:20:21.095 "data_size": 7936 00:20:21.095 } 00:20:21.095 ] 00:20:21.095 } 00:20:21.095 } 00:20:21.095 }' 00:20:21.095 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:21.095 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:21.095 pt2' 00:20:21.095 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:21.095 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:21.095 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:21.353 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:21.354 "name": "pt1", 00:20:21.354 "aliases": [ 00:20:21.354 "00000000-0000-0000-0000-000000000001" 00:20:21.354 ], 00:20:21.354 "product_name": "passthru", 00:20:21.354 "block_size": 4128, 00:20:21.354 "num_blocks": 8192, 00:20:21.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:21.354 "md_size": 32, 00:20:21.354 "md_interleave": true, 00:20:21.354 "dif_type": 0, 00:20:21.354 "assigned_rate_limits": { 00:20:21.354 "rw_ios_per_sec": 0, 00:20:21.354 "rw_mbytes_per_sec": 0, 00:20:21.354 "r_mbytes_per_sec": 0, 00:20:21.354 "w_mbytes_per_sec": 0 00:20:21.354 }, 00:20:21.354 "claimed": true, 00:20:21.354 "claim_type": "exclusive_write", 00:20:21.354 "zoned": false, 00:20:21.354 "supported_io_types": { 00:20:21.354 "read": true, 00:20:21.354 "write": true, 00:20:21.354 "unmap": true, 00:20:21.354 "flush": true, 00:20:21.354 "reset": true, 00:20:21.354 "nvme_admin": false, 00:20:21.354 "nvme_io": false, 00:20:21.354 "nvme_io_md": false, 00:20:21.354 "write_zeroes": true, 00:20:21.354 "zcopy": true, 00:20:21.354 "get_zone_info": false, 00:20:21.354 "zone_management": false, 00:20:21.354 "zone_append": false, 00:20:21.354 "compare": false, 00:20:21.354 "compare_and_write": false, 00:20:21.354 "abort": true, 00:20:21.354 "seek_hole": false, 00:20:21.354 "seek_data": false, 00:20:21.354 "copy": true, 00:20:21.354 "nvme_iov_md": false 00:20:21.354 }, 00:20:21.354 "memory_domains": [ 00:20:21.354 { 00:20:21.354 "dma_device_id": "system", 00:20:21.354 "dma_device_type": 1 00:20:21.354 }, 00:20:21.354 { 00:20:21.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.354 "dma_device_type": 2 00:20:21.354 } 00:20:21.354 ], 00:20:21.354 "driver_specific": { 00:20:21.354 "passthru": { 00:20:21.354 "name": "pt1", 00:20:21.354 "base_bdev_name": "malloc1" 00:20:21.354 } 00:20:21.354 } 00:20:21.354 }' 00:20:21.354 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.354 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.354 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:20:21.354 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.354 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.354 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:20:21.354 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.354 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.354 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:20:21.354 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.612 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.612 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:20:21.612 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:21.612 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 
00:20:21.612 06:33:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:21.869 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:21.869 "name": "pt2", 00:20:21.869 "aliases": [ 00:20:21.869 "00000000-0000-0000-0000-000000000002" 00:20:21.869 ], 00:20:21.869 "product_name": "passthru", 00:20:21.869 "block_size": 4128, 00:20:21.869 "num_blocks": 8192, 00:20:21.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:21.869 "md_size": 32, 00:20:21.869 "md_interleave": true, 00:20:21.869 "dif_type": 0, 00:20:21.869 "assigned_rate_limits": { 00:20:21.869 "rw_ios_per_sec": 0, 00:20:21.869 "rw_mbytes_per_sec": 0, 00:20:21.870 "r_mbytes_per_sec": 0, 00:20:21.870 "w_mbytes_per_sec": 0 00:20:21.870 }, 00:20:21.870 "claimed": true, 00:20:21.870 "claim_type": "exclusive_write", 00:20:21.870 "zoned": false, 00:20:21.870 "supported_io_types": { 00:20:21.870 "read": true, 00:20:21.870 "write": true, 00:20:21.870 "unmap": true, 00:20:21.870 "flush": true, 00:20:21.870 "reset": true, 00:20:21.870 "nvme_admin": false, 00:20:21.870 "nvme_io": false, 00:20:21.870 "nvme_io_md": false, 00:20:21.870 "write_zeroes": true, 00:20:21.870 "zcopy": true, 00:20:21.870 "get_zone_info": false, 00:20:21.870 "zone_management": false, 00:20:21.870 "zone_append": false, 00:20:21.870 "compare": false, 00:20:21.870 "compare_and_write": false, 00:20:21.870 "abort": true, 00:20:21.870 "seek_hole": false, 00:20:21.870 "seek_data": false, 00:20:21.870 "copy": true, 00:20:21.870 "nvme_iov_md": false 00:20:21.870 }, 00:20:21.870 "memory_domains": [ 00:20:21.870 { 00:20:21.870 "dma_device_id": "system", 00:20:21.870 "dma_device_type": 1 00:20:21.870 }, 00:20:21.870 { 00:20:21.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.870 "dma_device_type": 2 00:20:21.870 } 00:20:21.870 ], 00:20:21.870 "driver_specific": { 00:20:21.870 "passthru": { 00:20:21.870 "name": "pt2", 00:20:21.870 "base_bdev_name": "malloc2" 00:20:21.870 } 00:20:21.870 } 00:20:21.870 }' 00:20:21.870 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.870 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.870 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:20:21.870 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.870 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.870 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:20:21.870 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.870 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.870 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:20:21.870 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.870 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.870 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:20:21.870 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:21.870 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:20:22.128 [2024-07-23 06:33:34.507178] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:22.128 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=7b591736-48bd-11ef-a06c-59ddad71024c 00:20:22.128 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 7b591736-48bd-11ef-a06c-59ddad71024c ']' 00:20:22.128 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:22.386 [2024-07-23 06:33:34.771155] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.386 [2024-07-23 06:33:34.771178] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.386 [2024-07-23 06:33:34.771233] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.386 [2024-07-23 06:33:34.771249] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.386 [2024-07-23 06:33:34.771253] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x98687434f00 name raid_bdev1, state offline 00:20:22.386 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.386 06:33:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:20:22.644 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:20:22.644 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:20:22.644 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:22.644 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:22.902 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:22.902 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:23.160 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:23.160 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:23.727 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:20:23.727 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:20:23.727 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:20:23.728 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # 
valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:20:23.728 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:23.728 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:23.728 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:23.728 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:23.728 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:23.728 06:33:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:23.728 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:23.728 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:23.728 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:20:23.728 [2024-07-23 06:33:36.231367] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:23.728 [2024-07-23 06:33:36.231983] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:23.728 [2024-07-23 06:33:36.232016] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:23.728 [2024-07-23 06:33:36.232075] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:23.728 [2024-07-23 06:33:36.232087] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:23.728 [2024-07-23 06:33:36.232092] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x98687434c80 name raid_bdev1, state configuring 00:20:23.728 request: 00:20:23.728 { 00:20:23.728 "name": "raid_bdev1", 00:20:23.728 "raid_level": "raid1", 00:20:23.728 "base_bdevs": [ 00:20:23.728 "malloc1", 00:20:23.728 "malloc2" 00:20:23.728 ], 00:20:23.728 "superblock": false, 00:20:23.728 "method": "bdev_raid_create", 00:20:23.728 "req_id": 1 00:20:23.728 } 00:20:23.728 Got JSON-RPC error response 00:20:23.728 response: 00:20:23.728 { 00:20:23.728 "code": -17, 00:20:23.728 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:23.728 } 00:20:23.728 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:20:23.728 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:23.728 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:23.728 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:23.728 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.728 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:24.295 [2024-07-23 06:33:36.755366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:24.295 [2024-07-23 06:33:36.755438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.295 [2024-07-23 06:33:36.755466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x98687434780 00:20:24.295 [2024-07-23 06:33:36.755474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.295 [2024-07-23 06:33:36.756114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.295 [2024-07-23 06:33:36.756140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:24.295 [2024-07-23 06:33:36.756159] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:24.295 [2024-07-23 06:33:36.756172] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:24.295 pt1 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.295 06:33:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.570 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.570 "name": "raid_bdev1", 00:20:24.570 "uuid": "7b591736-48bd-11ef-a06c-59ddad71024c", 00:20:24.570 "strip_size_kb": 0, 00:20:24.570 "state": "configuring", 00:20:24.570 
"raid_level": "raid1", 00:20:24.570 "superblock": true, 00:20:24.570 "num_base_bdevs": 2, 00:20:24.570 "num_base_bdevs_discovered": 1, 00:20:24.570 "num_base_bdevs_operational": 2, 00:20:24.570 "base_bdevs_list": [ 00:20:24.570 { 00:20:24.570 "name": "pt1", 00:20:24.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:24.570 "is_configured": true, 00:20:24.570 "data_offset": 256, 00:20:24.570 "data_size": 7936 00:20:24.570 }, 00:20:24.570 { 00:20:24.570 "name": null, 00:20:24.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:24.570 "is_configured": false, 00:20:24.570 "data_offset": 256, 00:20:24.570 "data_size": 7936 00:20:24.570 } 00:20:24.570 ] 00:20:24.570 }' 00:20:24.570 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.570 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.845 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:20:24.845 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:24.845 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:24.845 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:25.104 [2024-07-23 06:33:37.579415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:25.104 [2024-07-23 06:33:37.579500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.104 [2024-07-23 06:33:37.579511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x98687434f00 00:20:25.104 [2024-07-23 06:33:37.579519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.104 [2024-07-23 06:33:37.579573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.104 [2024-07-23 06:33:37.579582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:25.104 [2024-07-23 06:33:37.579599] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:25.104 [2024-07-23 06:33:37.579607] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:25.104 [2024-07-23 06:33:37.579643] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x98687435180 00:20:25.104 [2024-07-23 06:33:37.579646] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:25.104 [2024-07-23 06:33:37.579663] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x98687497e20 00:20:25.104 [2024-07-23 06:33:37.579675] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x98687435180 00:20:25.104 [2024-07-23 06:33:37.579679] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x98687435180 00:20:25.104 [2024-07-23 06:33:37.579689] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.104 pt2 00:20:25.104 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:25.104 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:25.104 06:33:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:25.104 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:25.104 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:25.104 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:25.104 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:25.104 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:25.104 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:25.104 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.104 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:25.104 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.104 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.104 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.362 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:25.362 "name": "raid_bdev1", 00:20:25.362 "uuid": "7b591736-48bd-11ef-a06c-59ddad71024c", 00:20:25.362 "strip_size_kb": 0, 00:20:25.362 "state": "online", 00:20:25.362 "raid_level": "raid1", 00:20:25.362 "superblock": true, 00:20:25.362 "num_base_bdevs": 2, 00:20:25.362 "num_base_bdevs_discovered": 2, 00:20:25.362 "num_base_bdevs_operational": 2, 00:20:25.362 "base_bdevs_list": [ 00:20:25.362 { 00:20:25.362 "name": "pt1", 00:20:25.362 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:25.362 "is_configured": true, 00:20:25.362 "data_offset": 256, 00:20:25.362 "data_size": 7936 00:20:25.362 }, 00:20:25.362 { 00:20:25.362 "name": "pt2", 00:20:25.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.362 "is_configured": true, 00:20:25.362 "data_offset": 256, 00:20:25.362 "data_size": 7936 00:20:25.362 } 00:20:25.362 ] 00:20:25.362 }' 00:20:25.362 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:25.362 06:33:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.930 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:25.930 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:25.930 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:25.930 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:25.930 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:25.930 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:20:25.930 06:33:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:25.930 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:25.930 [2024-07-23 06:33:38.447526] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:26.189 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:26.189 "name": "raid_bdev1", 00:20:26.189 "aliases": [ 00:20:26.189 "7b591736-48bd-11ef-a06c-59ddad71024c" 00:20:26.189 ], 00:20:26.189 "product_name": "Raid Volume", 00:20:26.189 "block_size": 4128, 00:20:26.189 "num_blocks": 7936, 00:20:26.189 "uuid": "7b591736-48bd-11ef-a06c-59ddad71024c", 00:20:26.189 "md_size": 32, 00:20:26.189 "md_interleave": true, 00:20:26.189 "dif_type": 0, 00:20:26.189 "assigned_rate_limits": { 00:20:26.189 "rw_ios_per_sec": 0, 00:20:26.189 "rw_mbytes_per_sec": 0, 00:20:26.189 "r_mbytes_per_sec": 0, 00:20:26.189 "w_mbytes_per_sec": 0 00:20:26.189 }, 00:20:26.189 "claimed": false, 00:20:26.189 "zoned": false, 00:20:26.189 "supported_io_types": { 00:20:26.189 "read": true, 00:20:26.189 "write": true, 00:20:26.189 "unmap": false, 00:20:26.189 "flush": false, 00:20:26.189 "reset": true, 00:20:26.189 "nvme_admin": false, 00:20:26.189 "nvme_io": false, 00:20:26.189 "nvme_io_md": false, 00:20:26.189 "write_zeroes": true, 00:20:26.189 "zcopy": false, 00:20:26.189 "get_zone_info": false, 00:20:26.189 "zone_management": false, 00:20:26.189 "zone_append": false, 00:20:26.189 "compare": false, 00:20:26.189 "compare_and_write": false, 00:20:26.189 "abort": false, 00:20:26.189 "seek_hole": false, 00:20:26.189 "seek_data": false, 00:20:26.189 "copy": false, 00:20:26.189 "nvme_iov_md": false 00:20:26.189 }, 00:20:26.189 "memory_domains": [ 00:20:26.189 { 00:20:26.189 "dma_device_id": "system", 00:20:26.189 "dma_device_type": 1 00:20:26.189 }, 00:20:26.189 { 00:20:26.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.189 "dma_device_type": 2 00:20:26.189 }, 00:20:26.189 { 00:20:26.189 "dma_device_id": "system", 00:20:26.189 "dma_device_type": 1 00:20:26.189 }, 00:20:26.189 { 00:20:26.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.189 "dma_device_type": 2 00:20:26.189 } 00:20:26.189 ], 00:20:26.189 "driver_specific": { 00:20:26.189 "raid": { 00:20:26.189 "uuid": "7b591736-48bd-11ef-a06c-59ddad71024c", 00:20:26.189 "strip_size_kb": 0, 00:20:26.189 "state": "online", 00:20:26.189 "raid_level": "raid1", 00:20:26.189 "superblock": true, 00:20:26.189 "num_base_bdevs": 2, 00:20:26.189 "num_base_bdevs_discovered": 2, 00:20:26.189 "num_base_bdevs_operational": 2, 00:20:26.189 "base_bdevs_list": [ 00:20:26.189 { 00:20:26.189 "name": "pt1", 00:20:26.189 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:26.189 "is_configured": true, 00:20:26.189 "data_offset": 256, 00:20:26.189 "data_size": 7936 00:20:26.189 }, 00:20:26.189 { 00:20:26.189 "name": "pt2", 00:20:26.189 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:26.189 "is_configured": true, 00:20:26.189 "data_offset": 256, 00:20:26.189 "data_size": 7936 00:20:26.189 } 00:20:26.189 ] 00:20:26.189 } 00:20:26.189 } 00:20:26.189 }' 00:20:26.189 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:26.189 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- 
# base_bdev_names='pt1 00:20:26.189 pt2' 00:20:26.189 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:26.189 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:26.189 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:26.189 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:26.189 "name": "pt1", 00:20:26.189 "aliases": [ 00:20:26.189 "00000000-0000-0000-0000-000000000001" 00:20:26.189 ], 00:20:26.189 "product_name": "passthru", 00:20:26.189 "block_size": 4128, 00:20:26.189 "num_blocks": 8192, 00:20:26.189 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:26.189 "md_size": 32, 00:20:26.189 "md_interleave": true, 00:20:26.189 "dif_type": 0, 00:20:26.189 "assigned_rate_limits": { 00:20:26.189 "rw_ios_per_sec": 0, 00:20:26.189 "rw_mbytes_per_sec": 0, 00:20:26.189 "r_mbytes_per_sec": 0, 00:20:26.189 "w_mbytes_per_sec": 0 00:20:26.189 }, 00:20:26.189 "claimed": true, 00:20:26.189 "claim_type": "exclusive_write", 00:20:26.189 "zoned": false, 00:20:26.189 "supported_io_types": { 00:20:26.189 "read": true, 00:20:26.189 "write": true, 00:20:26.189 "unmap": true, 00:20:26.189 "flush": true, 00:20:26.189 "reset": true, 00:20:26.189 "nvme_admin": false, 00:20:26.189 "nvme_io": false, 00:20:26.189 "nvme_io_md": false, 00:20:26.189 "write_zeroes": true, 00:20:26.189 "zcopy": true, 00:20:26.189 "get_zone_info": false, 00:20:26.189 "zone_management": false, 00:20:26.189 "zone_append": false, 00:20:26.189 "compare": false, 00:20:26.189 "compare_and_write": false, 00:20:26.189 "abort": true, 00:20:26.189 "seek_hole": false, 00:20:26.189 "seek_data": false, 00:20:26.189 "copy": true, 00:20:26.189 "nvme_iov_md": false 00:20:26.189 }, 00:20:26.189 "memory_domains": [ 00:20:26.189 { 00:20:26.189 "dma_device_id": "system", 00:20:26.189 "dma_device_type": 1 00:20:26.189 }, 00:20:26.189 { 00:20:26.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.189 "dma_device_type": 2 00:20:26.189 } 00:20:26.190 ], 00:20:26.190 "driver_specific": { 00:20:26.190 "passthru": { 00:20:26.190 "name": "pt1", 00:20:26.190 "base_bdev_name": "malloc1" 00:20:26.190 } 00:20:26.190 } 00:20:26.190 }' 00:20:26.190 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.448 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.448 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:20:26.448 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.448 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.448 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:20:26.448 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.448 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.448 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:20:26.448 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:20:26.448 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:26.448 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:20:26.448 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:26.448 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:26.448 06:33:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:26.707 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:26.707 "name": "pt2", 00:20:26.707 "aliases": [ 00:20:26.707 "00000000-0000-0000-0000-000000000002" 00:20:26.707 ], 00:20:26.707 "product_name": "passthru", 00:20:26.707 "block_size": 4128, 00:20:26.707 "num_blocks": 8192, 00:20:26.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:26.707 "md_size": 32, 00:20:26.707 "md_interleave": true, 00:20:26.707 "dif_type": 0, 00:20:26.707 "assigned_rate_limits": { 00:20:26.707 "rw_ios_per_sec": 0, 00:20:26.707 "rw_mbytes_per_sec": 0, 00:20:26.707 "r_mbytes_per_sec": 0, 00:20:26.707 "w_mbytes_per_sec": 0 00:20:26.707 }, 00:20:26.707 "claimed": true, 00:20:26.707 "claim_type": "exclusive_write", 00:20:26.707 "zoned": false, 00:20:26.707 "supported_io_types": { 00:20:26.707 "read": true, 00:20:26.707 "write": true, 00:20:26.707 "unmap": true, 00:20:26.707 "flush": true, 00:20:26.707 "reset": true, 00:20:26.707 "nvme_admin": false, 00:20:26.707 "nvme_io": false, 00:20:26.707 "nvme_io_md": false, 00:20:26.707 "write_zeroes": true, 00:20:26.707 "zcopy": true, 00:20:26.707 "get_zone_info": false, 00:20:26.707 "zone_management": false, 00:20:26.707 "zone_append": false, 00:20:26.707 "compare": false, 00:20:26.707 "compare_and_write": false, 00:20:26.707 "abort": true, 00:20:26.707 "seek_hole": false, 00:20:26.707 "seek_data": false, 00:20:26.707 "copy": true, 00:20:26.707 "nvme_iov_md": false 00:20:26.707 }, 00:20:26.707 "memory_domains": [ 00:20:26.707 { 00:20:26.707 "dma_device_id": "system", 00:20:26.707 "dma_device_type": 1 00:20:26.707 }, 00:20:26.707 { 00:20:26.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.707 "dma_device_type": 2 00:20:26.707 } 00:20:26.707 ], 00:20:26.707 "driver_specific": { 00:20:26.707 "passthru": { 00:20:26.707 "name": "pt2", 00:20:26.707 "base_bdev_name": "malloc2" 00:20:26.707 } 00:20:26.707 } 00:20:26.707 }' 00:20:26.707 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.707 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.707 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:20:26.707 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.707 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.707 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:20:26.707 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.707 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.707 06:33:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:20:26.707 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:26.707 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:26.707 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:20:26.707 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:26.707 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:20:26.966 [2024-07-23 06:33:39.315623] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:26.966 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 7b591736-48bd-11ef-a06c-59ddad71024c '!=' 7b591736-48bd-11ef-a06c-59ddad71024c ']' 00:20:26.966 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:20:26.966 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:26.966 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:20:26.966 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:27.224 [2024-07-23 06:33:39.535591] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:27.224 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:27.224 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:27.224 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:27.224 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:27.224 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:27.224 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:27.224 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:27.224 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.224 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.224 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.224 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.224 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.483 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:27.483 "name": "raid_bdev1", 00:20:27.483 "uuid": "7b591736-48bd-11ef-a06c-59ddad71024c", 00:20:27.483 "strip_size_kb": 0, 00:20:27.483 "state": "online", 
00:20:27.483 "raid_level": "raid1", 00:20:27.483 "superblock": true, 00:20:27.483 "num_base_bdevs": 2, 00:20:27.483 "num_base_bdevs_discovered": 1, 00:20:27.483 "num_base_bdevs_operational": 1, 00:20:27.483 "base_bdevs_list": [ 00:20:27.483 { 00:20:27.483 "name": null, 00:20:27.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.483 "is_configured": false, 00:20:27.483 "data_offset": 256, 00:20:27.483 "data_size": 7936 00:20:27.483 }, 00:20:27.483 { 00:20:27.483 "name": "pt2", 00:20:27.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:27.483 "is_configured": true, 00:20:27.483 "data_offset": 256, 00:20:27.483 "data_size": 7936 00:20:27.483 } 00:20:27.483 ] 00:20:27.483 }' 00:20:27.483 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:27.483 06:33:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.742 06:33:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:28.032 [2024-07-23 06:33:40.359670] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:28.032 [2024-07-23 06:33:40.359697] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:28.032 [2024-07-23 06:33:40.359727] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:28.032 [2024-07-23 06:33:40.359739] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:28.032 [2024-07-23 06:33:40.359744] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x98687435180 name raid_bdev1, state offline 00:20:28.032 06:33:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:20:28.032 06:33:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.294 06:33:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:20:28.294 06:33:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:20:28.294 06:33:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:20:28.294 06:33:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:28.294 06:33:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:28.553 06:33:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:20:28.553 06:33:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:28.553 06:33:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:20:28.553 06:33:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:20:28.553 06:33:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:20:28.553 06:33:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:20:28.811 [2024-07-23 06:33:41.115754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:28.811 [2024-07-23 06:33:41.115822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.811 [2024-07-23 06:33:41.115835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x98687434f00 00:20:28.811 [2024-07-23 06:33:41.115843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.811 [2024-07-23 06:33:41.116449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.811 [2024-07-23 06:33:41.116479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:28.811 [2024-07-23 06:33:41.116499] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:28.811 [2024-07-23 06:33:41.116512] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:28.811 [2024-07-23 06:33:41.116532] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x98687435180 00:20:28.811 [2024-07-23 06:33:41.116536] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:28.811 [2024-07-23 06:33:41.116556] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x98687497e20 00:20:28.811 [2024-07-23 06:33:41.116569] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x98687435180 00:20:28.811 [2024-07-23 06:33:41.116572] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x98687435180 00:20:28.811 [2024-07-23 06:33:41.116584] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.811 pt2 00:20:28.811 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:28.811 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:28.811 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:28.811 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:28.811 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:28.811 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:28.811 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:28.811 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:28.811 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:28.811 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:28.811 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.811 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.070 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.070 "name": "raid_bdev1", 00:20:29.070 "uuid": 
"7b591736-48bd-11ef-a06c-59ddad71024c", 00:20:29.070 "strip_size_kb": 0, 00:20:29.070 "state": "online", 00:20:29.070 "raid_level": "raid1", 00:20:29.070 "superblock": true, 00:20:29.070 "num_base_bdevs": 2, 00:20:29.070 "num_base_bdevs_discovered": 1, 00:20:29.070 "num_base_bdevs_operational": 1, 00:20:29.070 "base_bdevs_list": [ 00:20:29.070 { 00:20:29.070 "name": null, 00:20:29.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.070 "is_configured": false, 00:20:29.070 "data_offset": 256, 00:20:29.070 "data_size": 7936 00:20:29.070 }, 00:20:29.070 { 00:20:29.070 "name": "pt2", 00:20:29.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:29.070 "is_configured": true, 00:20:29.070 "data_offset": 256, 00:20:29.070 "data_size": 7936 00:20:29.070 } 00:20:29.070 ] 00:20:29.070 }' 00:20:29.070 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.070 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.328 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:29.587 [2024-07-23 06:33:41.975872] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:29.587 [2024-07-23 06:33:41.975900] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:29.587 [2024-07-23 06:33:41.975939] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:29.587 [2024-07-23 06:33:41.975950] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:29.587 [2024-07-23 06:33:41.975955] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x98687435180 name raid_bdev1, state offline 00:20:29.587 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.587 06:33:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:20:29.845 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:20:29.845 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:20:29.845 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:20:29.845 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:30.104 [2024-07-23 06:33:42.491906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:30.104 [2024-07-23 06:33:42.491983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.104 [2024-07-23 06:33:42.492012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x98687434c80 00:20:30.104 [2024-07-23 06:33:42.492020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.104 [2024-07-23 06:33:42.492613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.104 [2024-07-23 06:33:42.492646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:30.104 [2024-07-23 06:33:42.492667] 
bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:30.104 [2024-07-23 06:33:42.492679] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:30.104 [2024-07-23 06:33:42.492701] bdev_raid.c:3641:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:30.104 [2024-07-23 06:33:42.492706] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:30.104 [2024-07-23 06:33:42.492712] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x98687434780 name raid_bdev1, state configuring 00:20:30.104 [2024-07-23 06:33:42.492722] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:30.104 [2024-07-23 06:33:42.492737] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x98687434780 00:20:30.104 [2024-07-23 06:33:42.492741] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:30.104 [2024-07-23 06:33:42.492761] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x98687497e20 00:20:30.104 [2024-07-23 06:33:42.492774] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x98687434780 00:20:30.104 [2024-07-23 06:33:42.492777] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x98687434780 00:20:30.104 [2024-07-23 06:33:42.492788] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.104 pt1 00:20:30.104 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:20:30.104 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:30.104 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:30.104 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:30.104 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:30.104 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:30.104 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:30.104 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.104 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.104 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.104 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.104 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.104 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.362 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.362 "name": "raid_bdev1", 00:20:30.362 "uuid": "7b591736-48bd-11ef-a06c-59ddad71024c", 00:20:30.362 "strip_size_kb": 0, 00:20:30.362 "state": "online", 00:20:30.362 
"raid_level": "raid1", 00:20:30.362 "superblock": true, 00:20:30.362 "num_base_bdevs": 2, 00:20:30.362 "num_base_bdevs_discovered": 1, 00:20:30.362 "num_base_bdevs_operational": 1, 00:20:30.362 "base_bdevs_list": [ 00:20:30.362 { 00:20:30.362 "name": null, 00:20:30.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.362 "is_configured": false, 00:20:30.362 "data_offset": 256, 00:20:30.362 "data_size": 7936 00:20:30.362 }, 00:20:30.362 { 00:20:30.362 "name": "pt2", 00:20:30.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:30.362 "is_configured": true, 00:20:30.362 "data_offset": 256, 00:20:30.362 "data_size": 7936 00:20:30.362 } 00:20:30.362 ] 00:20:30.362 }' 00:20:30.362 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.362 06:33:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.621 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:20:30.621 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:30.885 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:20:30.885 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:20:30.885 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:31.145 [2024-07-23 06:33:43.596032] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.145 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 7b591736-48bd-11ef-a06c-59ddad71024c '!=' 7b591736-48bd-11ef-a06c-59ddad71024c ']' 00:20:31.145 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 67222 00:20:31.145 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 67222 ']' 00:20:31.145 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 67222 00:20:31.145 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:20:31.145 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:20:31.145 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 67222 00:20:31.145 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:20:31.145 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:20:31.145 killing process with pid 67222 00:20:31.145 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:20:31.145 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67222' 00:20:31.145 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # kill 67222 00:20:31.145 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 67222 00:20:31.145 [2024-07-23 
06:33:43.622794] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:31.145 [2024-07-23 06:33:43.622821] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:31.145 [2024-07-23 06:33:43.622834] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:31.145 [2024-07-23 06:33:43.622838] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x98687434780 name raid_bdev1, state offline 00:20:31.145 [2024-07-23 06:33:43.634439] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:31.404 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:20:31.404 00:20:31.404 real 0m13.456s 00:20:31.404 user 0m24.071s 00:20:31.404 sys 0m2.030s 00:20:31.404 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:31.404 06:33:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.404 ************************************ 00:20:31.404 END TEST raid_superblock_test_md_interleaved 00:20:31.404 ************************************ 00:20:31.404 06:33:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:31.404 06:33:43 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:31.404 06:33:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:20:31.404 06:33:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:31.404 06:33:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:31.404 ************************************ 00:20:31.404 START TEST raid_rebuild_test_sb_md_interleaved 00:20:31.404 ************************************ 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:31.404 06:33:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=67613 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 67613 /var/tmp/spdk-raid.sock 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:31.404 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 67613 ']' 00:20:31.405 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:31.405 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:31.405 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:31.405 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.405 06:33:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.405 [2024-07-23 06:33:43.873083] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:31.405 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:31.405 Zero copy mechanism will not be used. 00:20:31.405 [2024-07-23 06:33:43.873351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:20:31.972 EAL: TSC is not safe to use in SMP mode 00:20:31.972 EAL: TSC is not invariant 00:20:31.972 [2024-07-23 06:33:44.387897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.972 [2024-07-23 06:33:44.469504] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:20:31.972 [2024-07-23 06:33:44.471668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.972 [2024-07-23 06:33:44.472525] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:31.972 [2024-07-23 06:33:44.472539] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:32.539 06:33:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:32.539 06:33:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:20:32.539 06:33:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:32.539 06:33:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:20:32.797 BaseBdev1_malloc 00:20:32.797 06:33:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:33.055 [2024-07-23 06:33:45.511581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:33.055 [2024-07-23 06:33:45.511676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.055 [2024-07-23 06:33:45.512339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd5b5fc34780 00:20:33.055 [2024-07-23 06:33:45.512401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.055 [2024-07-23 06:33:45.513137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.055 [2024-07-23 06:33:45.513177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:33.055 BaseBdev1 00:20:33.055 06:33:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:33.055 06:33:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:20:33.314 BaseBdev2_malloc 00:20:33.314 06:33:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:33.572 [2024-07-23 06:33:45.983608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:33.572 [2024-07-23 06:33:45.983670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.572 [2024-07-23 06:33:45.983711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd5b5fc34c80 00:20:33.572 [2024-07-23 06:33:45.983719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.572 [2024-07-23 06:33:45.984367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.572 [2024-07-23 06:33:45.984392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:33.572 BaseBdev2 00:20:33.572 06:33:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:20:33.830 spare_malloc 
00:20:33.830 06:33:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:34.088 spare_delay 00:20:34.088 06:33:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:34.347 [2024-07-23 06:33:46.643660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:34.347 [2024-07-23 06:33:46.643721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.347 [2024-07-23 06:33:46.643761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd5b5fc35400 00:20:34.347 [2024-07-23 06:33:46.643768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.347 [2024-07-23 06:33:46.644460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.347 [2024-07-23 06:33:46.644483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:34.347 spare 00:20:34.347 06:33:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:34.347 [2024-07-23 06:33:46.867683] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:34.347 [2024-07-23 06:33:46.868347] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:34.347 [2024-07-23 06:33:46.868455] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xd5b5fc35680 00:20:34.347 [2024-07-23 06:33:46.868461] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:34.347 [2024-07-23 06:33:46.868492] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd5b5fc97e20 00:20:34.347 [2024-07-23 06:33:46.868506] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd5b5fc35680 00:20:34.347 [2024-07-23 06:33:46.868509] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd5b5fc35680 00:20:34.347 [2024-07-23 06:33:46.868522] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.606 06:33:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:34.606 06:33:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:34.606 06:33:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:34.606 06:33:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:34.606 06:33:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:34.606 06:33:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:34.606 06:33:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:34.606 06:33:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:34.606 06:33:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:34.606 06:33:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:34.606 06:33:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.606 06:33:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.865 06:33:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:34.865 "name": "raid_bdev1", 00:20:34.865 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:34.865 "strip_size_kb": 0, 00:20:34.865 "state": "online", 00:20:34.865 "raid_level": "raid1", 00:20:34.865 "superblock": true, 00:20:34.865 "num_base_bdevs": 2, 00:20:34.865 "num_base_bdevs_discovered": 2, 00:20:34.865 "num_base_bdevs_operational": 2, 00:20:34.865 "base_bdevs_list": [ 00:20:34.865 { 00:20:34.865 "name": "BaseBdev1", 00:20:34.865 "uuid": "62e0b3e7-c36a-6d5a-9b24-c491cda93c48", 00:20:34.865 "is_configured": true, 00:20:34.865 "data_offset": 256, 00:20:34.865 "data_size": 7936 00:20:34.865 }, 00:20:34.865 { 00:20:34.865 "name": "BaseBdev2", 00:20:34.865 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:34.865 "is_configured": true, 00:20:34.865 "data_offset": 256, 00:20:34.865 "data_size": 7936 00:20:34.865 } 00:20:34.865 ] 00:20:34.865 }' 00:20:34.865 06:33:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:34.865 06:33:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.150 06:33:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:35.150 06:33:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:20:35.409 [2024-07-23 06:33:47.683789] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:35.409 06:33:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:20:35.409 06:33:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.409 06:33:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:35.667 06:33:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:20:35.667 06:33:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:20:35.667 06:33:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:20:35.667 06:33:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:35.667 [2024-07-23 06:33:48.147791] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:35.667 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:35.667 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:35.667 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:35.667 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:35.667 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:35.667 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:35.667 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:35.667 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:35.667 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:35.667 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:35.667 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.667 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.924 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:35.924 "name": "raid_bdev1", 00:20:35.924 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:35.924 "strip_size_kb": 0, 00:20:35.924 "state": "online", 00:20:35.924 "raid_level": "raid1", 00:20:35.924 "superblock": true, 00:20:35.924 "num_base_bdevs": 2, 00:20:35.924 "num_base_bdevs_discovered": 1, 00:20:35.924 "num_base_bdevs_operational": 1, 00:20:35.924 "base_bdevs_list": [ 00:20:35.924 { 00:20:35.924 "name": null, 00:20:35.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.924 "is_configured": false, 00:20:35.924 "data_offset": 256, 00:20:35.924 "data_size": 7936 00:20:35.924 }, 00:20:35.924 { 00:20:35.924 "name": "BaseBdev2", 00:20:35.924 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:35.924 "is_configured": true, 00:20:35.924 "data_offset": 256, 00:20:35.924 "data_size": 7936 00:20:35.924 } 00:20:35.924 ] 00:20:35.924 }' 00:20:35.924 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:35.924 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.182 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:36.747 [2024-07-23 06:33:48.967830] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:36.747 [2024-07-23 06:33:48.968069] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd5b5fc97ec0 00:20:36.747 [2024-07-23 06:33:48.968940] bdev_raid.c:2906:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:36.747 06:33:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:20:37.681 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.681 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:20:37.681 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:37.681 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:37.681 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:37.681 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.681 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.938 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:37.938 "name": "raid_bdev1", 00:20:37.938 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:37.938 "strip_size_kb": 0, 00:20:37.938 "state": "online", 00:20:37.938 "raid_level": "raid1", 00:20:37.938 "superblock": true, 00:20:37.938 "num_base_bdevs": 2, 00:20:37.938 "num_base_bdevs_discovered": 2, 00:20:37.938 "num_base_bdevs_operational": 2, 00:20:37.938 "process": { 00:20:37.938 "type": "rebuild", 00:20:37.938 "target": "spare", 00:20:37.938 "progress": { 00:20:37.938 "blocks": 3328, 00:20:37.938 "percent": 41 00:20:37.938 } 00:20:37.938 }, 00:20:37.938 "base_bdevs_list": [ 00:20:37.938 { 00:20:37.938 "name": "spare", 00:20:37.938 "uuid": "cfd8371d-af1b-925c-a118-25579d35ebb4", 00:20:37.938 "is_configured": true, 00:20:37.938 "data_offset": 256, 00:20:37.938 "data_size": 7936 00:20:37.938 }, 00:20:37.938 { 00:20:37.938 "name": "BaseBdev2", 00:20:37.938 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:37.938 "is_configured": true, 00:20:37.938 "data_offset": 256, 00:20:37.938 "data_size": 7936 00:20:37.938 } 00:20:37.938 ] 00:20:37.938 }' 00:20:37.938 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:37.938 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.938 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:37.938 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.938 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:38.211 [2024-07-23 06:33:50.544394] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:38.211 [2024-07-23 06:33:50.576376] bdev_raid.c:2544:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:20:38.211 [2024-07-23 06:33:50.576423] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.211 [2024-07-23 06:33:50.576429] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:38.211 [2024-07-23 06:33:50.576433] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:20:38.211 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:38.211 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:20:38.211 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:38.211 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:38.211 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:38.211 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:38.211 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.211 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.211 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.212 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.212 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.212 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.490 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.490 "name": "raid_bdev1", 00:20:38.490 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:38.490 "strip_size_kb": 0, 00:20:38.490 "state": "online", 00:20:38.490 "raid_level": "raid1", 00:20:38.490 "superblock": true, 00:20:38.490 "num_base_bdevs": 2, 00:20:38.490 "num_base_bdevs_discovered": 1, 00:20:38.490 "num_base_bdevs_operational": 1, 00:20:38.490 "base_bdevs_list": [ 00:20:38.490 { 00:20:38.490 "name": null, 00:20:38.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.490 "is_configured": false, 00:20:38.490 "data_offset": 256, 00:20:38.490 "data_size": 7936 00:20:38.490 }, 00:20:38.490 { 00:20:38.490 "name": "BaseBdev2", 00:20:38.490 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:38.490 "is_configured": true, 00:20:38.490 "data_offset": 256, 00:20:38.490 "data_size": 7936 00:20:38.490 } 00:20:38.490 ] 00:20:38.490 }' 00:20:38.490 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.491 06:33:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.749 06:33:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:38.749 06:33:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:38.749 06:33:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:38.749 06:33:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:38.749 06:33:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:38.749 06:33:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.749 06:33:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.007 06:33:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:39.007 "name": "raid_bdev1", 00:20:39.007 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:39.007 "strip_size_kb": 0, 00:20:39.007 "state": "online", 00:20:39.007 "raid_level": "raid1", 00:20:39.007 "superblock": true, 00:20:39.007 "num_base_bdevs": 2, 00:20:39.007 "num_base_bdevs_discovered": 1, 00:20:39.008 "num_base_bdevs_operational": 1, 00:20:39.008 "base_bdevs_list": [ 00:20:39.008 { 00:20:39.008 "name": null, 00:20:39.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.008 "is_configured": false, 00:20:39.008 "data_offset": 256, 00:20:39.008 "data_size": 7936 00:20:39.008 }, 00:20:39.008 { 00:20:39.008 "name": "BaseBdev2", 00:20:39.008 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:39.008 "is_configured": true, 00:20:39.008 "data_offset": 256, 00:20:39.008 "data_size": 7936 00:20:39.008 } 00:20:39.008 ] 00:20:39.008 }' 00:20:39.008 06:33:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:39.008 06:33:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:39.008 06:33:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:39.008 06:33:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:39.008 06:33:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:39.265 [2024-07-23 06:33:51.712504] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:39.265 [2024-07-23 06:33:51.712731] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd5b5fc97e20 00:20:39.265 [2024-07-23 06:33:51.713552] bdev_raid.c:2906:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:39.265 06:33:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:40.638 06:33:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.638 06:33:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:40.638 06:33:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:40.638 06:33:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:40.638 06:33:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:40.638 06:33:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.638 06:33:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.638 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:40.638 "name": "raid_bdev1", 00:20:40.638 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:40.638 "strip_size_kb": 0, 00:20:40.638 "state": "online", 00:20:40.638 "raid_level": "raid1", 00:20:40.638 "superblock": true, 00:20:40.638 "num_base_bdevs": 2, 00:20:40.638 "num_base_bdevs_discovered": 2, 00:20:40.638 
"num_base_bdevs_operational": 2, 00:20:40.638 "process": { 00:20:40.638 "type": "rebuild", 00:20:40.638 "target": "spare", 00:20:40.638 "progress": { 00:20:40.638 "blocks": 3328, 00:20:40.638 "percent": 41 00:20:40.638 } 00:20:40.638 }, 00:20:40.638 "base_bdevs_list": [ 00:20:40.638 { 00:20:40.638 "name": "spare", 00:20:40.638 "uuid": "cfd8371d-af1b-925c-a118-25579d35ebb4", 00:20:40.638 "is_configured": true, 00:20:40.638 "data_offset": 256, 00:20:40.638 "data_size": 7936 00:20:40.638 }, 00:20:40.638 { 00:20:40.638 "name": "BaseBdev2", 00:20:40.638 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:40.638 "is_configured": true, 00:20:40.638 "data_offset": 256, 00:20:40.638 "data_size": 7936 00:20:40.638 } 00:20:40.638 ] 00:20:40.638 }' 00:20:40.638 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:40.638 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:40.638 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:40.638 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.638 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:20:40.638 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:20:40.638 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:20:40.638 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:20:40.639 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:20:40.639 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:20:40.639 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=730 00:20:40.639 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:40.639 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.639 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:40.639 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:40.639 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:40.639 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:40.639 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.639 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.899 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:40.899 "name": "raid_bdev1", 00:20:40.899 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:40.899 "strip_size_kb": 0, 00:20:40.899 "state": "online", 00:20:40.899 "raid_level": "raid1", 00:20:40.899 "superblock": true, 00:20:40.899 
"num_base_bdevs": 2, 00:20:40.899 "num_base_bdevs_discovered": 2, 00:20:40.899 "num_base_bdevs_operational": 2, 00:20:40.899 "process": { 00:20:40.899 "type": "rebuild", 00:20:40.899 "target": "spare", 00:20:40.899 "progress": { 00:20:40.899 "blocks": 3840, 00:20:40.899 "percent": 48 00:20:40.899 } 00:20:40.899 }, 00:20:40.899 "base_bdevs_list": [ 00:20:40.899 { 00:20:40.899 "name": "spare", 00:20:40.899 "uuid": "cfd8371d-af1b-925c-a118-25579d35ebb4", 00:20:40.899 "is_configured": true, 00:20:40.899 "data_offset": 256, 00:20:40.899 "data_size": 7936 00:20:40.899 }, 00:20:40.899 { 00:20:40.899 "name": "BaseBdev2", 00:20:40.899 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:40.899 "is_configured": true, 00:20:40.899 "data_offset": 256, 00:20:40.899 "data_size": 7936 00:20:40.899 } 00:20:40.899 ] 00:20:40.899 }' 00:20:40.899 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:40.899 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:40.899 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:40.899 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.899 06:33:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:42.274 06:33:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:42.274 06:33:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:42.274 06:33:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:42.274 06:33:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:42.274 06:33:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:42.274 06:33:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:42.274 06:33:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.274 06:33:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.274 06:33:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:42.274 "name": "raid_bdev1", 00:20:42.274 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:42.274 "strip_size_kb": 0, 00:20:42.274 "state": "online", 00:20:42.274 "raid_level": "raid1", 00:20:42.274 "superblock": true, 00:20:42.274 "num_base_bdevs": 2, 00:20:42.274 "num_base_bdevs_discovered": 2, 00:20:42.274 "num_base_bdevs_operational": 2, 00:20:42.274 "process": { 00:20:42.274 "type": "rebuild", 00:20:42.274 "target": "spare", 00:20:42.274 "progress": { 00:20:42.274 "blocks": 7424, 00:20:42.274 "percent": 93 00:20:42.274 } 00:20:42.274 }, 00:20:42.274 "base_bdevs_list": [ 00:20:42.274 { 00:20:42.274 "name": "spare", 00:20:42.274 "uuid": "cfd8371d-af1b-925c-a118-25579d35ebb4", 00:20:42.274 "is_configured": true, 00:20:42.274 "data_offset": 256, 00:20:42.274 "data_size": 7936 00:20:42.274 }, 00:20:42.274 { 00:20:42.274 "name": "BaseBdev2", 00:20:42.274 "uuid": 
"ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:42.274 "is_configured": true, 00:20:42.274 "data_offset": 256, 00:20:42.274 "data_size": 7936 00:20:42.274 } 00:20:42.274 ] 00:20:42.274 }' 00:20:42.274 06:33:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:42.274 06:33:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:42.274 06:33:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:42.274 06:33:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:42.274 06:33:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:42.544 [2024-07-23 06:33:54.827502] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:42.544 [2024-07-23 06:33:54.827535] bdev_raid.c:2534:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:42.544 [2024-07-23 06:33:54.827621] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.479 06:33:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:43.479 06:33:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:43.479 06:33:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:43.479 06:33:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:43.479 06:33:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:43.479 06:33:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:43.479 06:33:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.479 06:33:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.737 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:43.737 "name": "raid_bdev1", 00:20:43.737 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:43.737 "strip_size_kb": 0, 00:20:43.737 "state": "online", 00:20:43.737 "raid_level": "raid1", 00:20:43.737 "superblock": true, 00:20:43.737 "num_base_bdevs": 2, 00:20:43.737 "num_base_bdevs_discovered": 2, 00:20:43.737 "num_base_bdevs_operational": 2, 00:20:43.737 "base_bdevs_list": [ 00:20:43.737 { 00:20:43.737 "name": "spare", 00:20:43.737 "uuid": "cfd8371d-af1b-925c-a118-25579d35ebb4", 00:20:43.737 "is_configured": true, 00:20:43.737 "data_offset": 256, 00:20:43.737 "data_size": 7936 00:20:43.737 }, 00:20:43.737 { 00:20:43.737 "name": "BaseBdev2", 00:20:43.737 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:43.737 "is_configured": true, 00:20:43.737 "data_offset": 256, 00:20:43.737 "data_size": 7936 00:20:43.737 } 00:20:43.737 ] 00:20:43.737 }' 00:20:43.737 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:43.737 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:43.737 06:33:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:43.737 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:20:43.737 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:20:43.737 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:43.737 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:43.737 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:43.737 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:43.737 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:43.737 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.738 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.995 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:43.995 "name": "raid_bdev1", 00:20:43.995 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:43.995 "strip_size_kb": 0, 00:20:43.995 "state": "online", 00:20:43.995 "raid_level": "raid1", 00:20:43.996 "superblock": true, 00:20:43.996 "num_base_bdevs": 2, 00:20:43.996 "num_base_bdevs_discovered": 2, 00:20:43.996 "num_base_bdevs_operational": 2, 00:20:43.996 "base_bdevs_list": [ 00:20:43.996 { 00:20:43.996 "name": "spare", 00:20:43.996 "uuid": "cfd8371d-af1b-925c-a118-25579d35ebb4", 00:20:43.996 "is_configured": true, 00:20:43.996 "data_offset": 256, 00:20:43.996 "data_size": 7936 00:20:43.996 }, 00:20:43.996 { 00:20:43.996 "name": "BaseBdev2", 00:20:43.996 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:43.996 "is_configured": true, 00:20:43.996 "data_offset": 256, 00:20:43.996 "data_size": 7936 00:20:43.996 } 00:20:43.996 ] 00:20:43.996 }' 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.996 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.254 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:44.254 "name": "raid_bdev1", 00:20:44.254 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:44.254 "strip_size_kb": 0, 00:20:44.254 "state": "online", 00:20:44.254 "raid_level": "raid1", 00:20:44.254 "superblock": true, 00:20:44.254 "num_base_bdevs": 2, 00:20:44.254 "num_base_bdevs_discovered": 2, 00:20:44.254 "num_base_bdevs_operational": 2, 00:20:44.254 "base_bdevs_list": [ 00:20:44.254 { 00:20:44.254 "name": "spare", 00:20:44.254 "uuid": "cfd8371d-af1b-925c-a118-25579d35ebb4", 00:20:44.254 "is_configured": true, 00:20:44.254 "data_offset": 256, 00:20:44.254 "data_size": 7936 00:20:44.254 }, 00:20:44.254 { 00:20:44.254 "name": "BaseBdev2", 00:20:44.254 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:44.254 "is_configured": true, 00:20:44.254 "data_offset": 256, 00:20:44.254 "data_size": 7936 00:20:44.254 } 00:20:44.254 ] 00:20:44.254 }' 00:20:44.254 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:44.254 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.511 06:33:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:44.774 [2024-07-23 06:33:57.235721] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:44.774 [2024-07-23 06:33:57.235744] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:44.774 [2024-07-23 06:33:57.235783] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:44.774 [2024-07-23 06:33:57.235797] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:44.774 [2024-07-23 06:33:57.235801] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd5b5fc35680 name raid_bdev1, state offline 00:20:44.774 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.774 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:20:45.044 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:20:45.044 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:20:45.044 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' 
true = true ']' 00:20:45.044 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:45.301 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:45.560 [2024-07-23 06:33:57.963741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:45.560 [2024-07-23 06:33:57.963825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.560 [2024-07-23 06:33:57.963869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd5b5fc35400 00:20:45.560 [2024-07-23 06:33:57.963878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.560 [2024-07-23 06:33:57.964498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.560 [2024-07-23 06:33:57.964522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:45.560 [2024-07-23 06:33:57.964544] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:45.560 [2024-07-23 06:33:57.964557] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:45.560 [2024-07-23 06:33:57.964582] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:45.560 spare 00:20:45.560 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:45.560 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:45.560 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:45.560 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:45.560 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:45.560 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:45.560 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:45.560 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:45.560 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:45.560 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:45.560 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.560 06:33:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.560 [2024-07-23 06:33:58.064559] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xd5b5fc35680 00:20:45.560 [2024-07-23 06:33:58.064579] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:45.560 [2024-07-23 06:33:58.064618] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd5b5fc97e20 00:20:45.560 [2024-07-23 06:33:58.064646] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd5b5fc35680 00:20:45.560 [2024-07-23 06:33:58.064650] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd5b5fc35680 00:20:45.560 [2024-07-23 06:33:58.064675] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.819 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.819 "name": "raid_bdev1", 00:20:45.819 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:45.819 "strip_size_kb": 0, 00:20:45.819 "state": "online", 00:20:45.819 "raid_level": "raid1", 00:20:45.819 "superblock": true, 00:20:45.819 "num_base_bdevs": 2, 00:20:45.819 "num_base_bdevs_discovered": 2, 00:20:45.819 "num_base_bdevs_operational": 2, 00:20:45.819 "base_bdevs_list": [ 00:20:45.819 { 00:20:45.819 "name": "spare", 00:20:45.819 "uuid": "cfd8371d-af1b-925c-a118-25579d35ebb4", 00:20:45.819 "is_configured": true, 00:20:45.819 "data_offset": 256, 00:20:45.819 "data_size": 7936 00:20:45.819 }, 00:20:45.819 { 00:20:45.819 "name": "BaseBdev2", 00:20:45.819 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:45.819 "is_configured": true, 00:20:45.819 "data_offset": 256, 00:20:45.819 "data_size": 7936 00:20:45.819 } 00:20:45.819 ] 00:20:45.819 }' 00:20:45.819 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.819 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:46.078 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:46.078 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:46.078 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:46.078 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:46.078 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:46.078 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.078 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.337 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:46.337 "name": "raid_bdev1", 00:20:46.337 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:46.337 "strip_size_kb": 0, 00:20:46.337 "state": "online", 00:20:46.337 "raid_level": "raid1", 00:20:46.337 "superblock": true, 00:20:46.337 "num_base_bdevs": 2, 00:20:46.337 "num_base_bdevs_discovered": 2, 00:20:46.337 "num_base_bdevs_operational": 2, 00:20:46.337 "base_bdevs_list": [ 00:20:46.337 { 00:20:46.337 "name": "spare", 00:20:46.337 "uuid": "cfd8371d-af1b-925c-a118-25579d35ebb4", 00:20:46.337 "is_configured": true, 00:20:46.337 "data_offset": 256, 00:20:46.337 "data_size": 7936 00:20:46.337 }, 00:20:46.337 { 00:20:46.337 "name": "BaseBdev2", 00:20:46.337 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:46.337 "is_configured": true, 00:20:46.337 "data_offset": 256, 00:20:46.337 "data_size": 7936 00:20:46.337 } 00:20:46.337 ] 00:20:46.337 }' 00:20:46.337 06:33:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:46.337 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:46.337 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:46.337 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:46.337 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.337 06:33:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:46.596 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:20:46.596 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:46.855 [2024-07-23 06:33:59.323824] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:46.855 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:46.855 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:46.855 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:46.855 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:46.855 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:46.855 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:46.855 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.855 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.855 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.855 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.855 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.855 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.113 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:47.113 "name": "raid_bdev1", 00:20:47.113 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:47.113 "strip_size_kb": 0, 00:20:47.113 "state": "online", 00:20:47.113 "raid_level": "raid1", 00:20:47.113 "superblock": true, 00:20:47.113 "num_base_bdevs": 2, 00:20:47.113 "num_base_bdevs_discovered": 1, 00:20:47.113 "num_base_bdevs_operational": 1, 00:20:47.113 "base_bdevs_list": [ 00:20:47.113 { 00:20:47.113 "name": null, 00:20:47.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.113 "is_configured": false, 00:20:47.113 "data_offset": 256, 00:20:47.113 "data_size": 7936 00:20:47.113 }, 
00:20:47.113 { 00:20:47.113 "name": "BaseBdev2", 00:20:47.113 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:47.113 "is_configured": true, 00:20:47.114 "data_offset": 256, 00:20:47.114 "data_size": 7936 00:20:47.114 } 00:20:47.114 ] 00:20:47.114 }' 00:20:47.114 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:47.114 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:47.371 06:33:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:47.629 [2024-07-23 06:34:00.139832] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:47.629 [2024-07-23 06:34:00.139918] bdev_raid.c:3656:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:47.629 [2024-07-23 06:34:00.139924] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:47.629 [2024-07-23 06:34:00.139959] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:47.629 [2024-07-23 06:34:00.140153] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd5b5fc97ec0 00:20:47.629 [2024-07-23 06:34:00.140689] bdev_raid.c:2906:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:47.886 06:34:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:20:48.818 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:48.818 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:48.818 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:48.818 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:48.818 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:48.818 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.818 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.075 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:49.075 "name": "raid_bdev1", 00:20:49.075 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:49.075 "strip_size_kb": 0, 00:20:49.075 "state": "online", 00:20:49.075 "raid_level": "raid1", 00:20:49.075 "superblock": true, 00:20:49.075 "num_base_bdevs": 2, 00:20:49.075 "num_base_bdevs_discovered": 2, 00:20:49.075 "num_base_bdevs_operational": 2, 00:20:49.075 "process": { 00:20:49.075 "type": "rebuild", 00:20:49.075 "target": "spare", 00:20:49.075 "progress": { 00:20:49.075 "blocks": 3072, 00:20:49.075 "percent": 38 00:20:49.075 } 00:20:49.075 }, 00:20:49.075 "base_bdevs_list": [ 00:20:49.075 { 00:20:49.075 "name": "spare", 00:20:49.075 "uuid": "cfd8371d-af1b-925c-a118-25579d35ebb4", 00:20:49.075 "is_configured": true, 00:20:49.075 "data_offset": 256, 00:20:49.075 "data_size": 7936 00:20:49.075 }, 00:20:49.075 { 00:20:49.075 
"name": "BaseBdev2", 00:20:49.075 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:49.075 "is_configured": true, 00:20:49.075 "data_offset": 256, 00:20:49.075 "data_size": 7936 00:20:49.075 } 00:20:49.075 ] 00:20:49.075 }' 00:20:49.075 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:49.075 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:49.075 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:49.075 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:49.075 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:49.333 [2024-07-23 06:34:01.676101] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:49.333 [2024-07-23 06:34:01.748036] bdev_raid.c:2544:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:20:49.333 [2024-07-23 06:34:01.748092] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:49.333 [2024-07-23 06:34:01.748098] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:49.333 [2024-07-23 06:34:01.748103] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:20:49.333 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:49.333 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:49.333 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:49.333 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:49.333 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:49.333 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:49.333 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.333 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.333 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.333 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.333 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.333 06:34:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.590 06:34:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.590 "name": "raid_bdev1", 00:20:49.590 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:49.590 "strip_size_kb": 0, 00:20:49.590 "state": "online", 00:20:49.590 "raid_level": "raid1", 00:20:49.590 "superblock": true, 00:20:49.590 
"num_base_bdevs": 2, 00:20:49.590 "num_base_bdevs_discovered": 1, 00:20:49.590 "num_base_bdevs_operational": 1, 00:20:49.590 "base_bdevs_list": [ 00:20:49.590 { 00:20:49.590 "name": null, 00:20:49.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.590 "is_configured": false, 00:20:49.590 "data_offset": 256, 00:20:49.590 "data_size": 7936 00:20:49.590 }, 00:20:49.590 { 00:20:49.590 "name": "BaseBdev2", 00:20:49.590 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:49.590 "is_configured": true, 00:20:49.590 "data_offset": 256, 00:20:49.590 "data_size": 7936 00:20:49.590 } 00:20:49.590 ] 00:20:49.590 }' 00:20:49.590 06:34:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.590 06:34:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.848 06:34:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:50.106 [2024-07-23 06:34:02.600164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:50.106 [2024-07-23 06:34:02.600227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.106 [2024-07-23 06:34:02.600254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd5b5fc35400 00:20:50.106 [2024-07-23 06:34:02.600263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.106 [2024-07-23 06:34:02.600326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.106 [2024-07-23 06:34:02.600337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:50.106 [2024-07-23 06:34:02.600357] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:50.106 [2024-07-23 06:34:02.600363] bdev_raid.c:3656:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:50.106 [2024-07-23 06:34:02.600366] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:50.106 [2024-07-23 06:34:02.600378] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:50.106 [2024-07-23 06:34:02.600552] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd5b5fc97e20 00:20:50.106 [2024-07-23 06:34:02.601086] bdev_raid.c:2906:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:50.106 spare 00:20:50.106 06:34:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:20:51.480 06:34:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.481 06:34:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:51.481 06:34:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:51.481 06:34:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:51.481 06:34:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:51.481 06:34:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.481 06:34:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.481 06:34:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:51.481 "name": "raid_bdev1", 00:20:51.481 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:51.481 "strip_size_kb": 0, 00:20:51.481 "state": "online", 00:20:51.481 "raid_level": "raid1", 00:20:51.481 "superblock": true, 00:20:51.481 "num_base_bdevs": 2, 00:20:51.481 "num_base_bdevs_discovered": 2, 00:20:51.481 "num_base_bdevs_operational": 2, 00:20:51.481 "process": { 00:20:51.481 "type": "rebuild", 00:20:51.481 "target": "spare", 00:20:51.481 "progress": { 00:20:51.481 "blocks": 3328, 00:20:51.481 "percent": 41 00:20:51.481 } 00:20:51.481 }, 00:20:51.481 "base_bdevs_list": [ 00:20:51.481 { 00:20:51.481 "name": "spare", 00:20:51.481 "uuid": "cfd8371d-af1b-925c-a118-25579d35ebb4", 00:20:51.481 "is_configured": true, 00:20:51.481 "data_offset": 256, 00:20:51.481 "data_size": 7936 00:20:51.481 }, 00:20:51.481 { 00:20:51.481 "name": "BaseBdev2", 00:20:51.481 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:51.481 "is_configured": true, 00:20:51.481 "data_offset": 256, 00:20:51.481 "data_size": 7936 00:20:51.481 } 00:20:51.481 ] 00:20:51.481 }' 00:20:51.481 06:34:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:51.481 06:34:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:51.481 06:34:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:51.481 06:34:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:51.481 06:34:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:52.048 [2024-07-23 06:34:04.272908] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:52.048 [2024-07-23 06:34:04.308820] 
bdev_raid.c:2544:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:20:52.048 [2024-07-23 06:34:04.308883] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.048 [2024-07-23 06:34:04.308905] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:52.048 [2024-07-23 06:34:04.308909] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:52.048 "name": "raid_bdev1", 00:20:52.048 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:52.048 "strip_size_kb": 0, 00:20:52.048 "state": "online", 00:20:52.048 "raid_level": "raid1", 00:20:52.048 "superblock": true, 00:20:52.048 "num_base_bdevs": 2, 00:20:52.048 "num_base_bdevs_discovered": 1, 00:20:52.048 "num_base_bdevs_operational": 1, 00:20:52.048 "base_bdevs_list": [ 00:20:52.048 { 00:20:52.048 "name": null, 00:20:52.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.048 "is_configured": false, 00:20:52.048 "data_offset": 256, 00:20:52.048 "data_size": 7936 00:20:52.048 }, 00:20:52.048 { 00:20:52.048 "name": "BaseBdev2", 00:20:52.048 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:52.048 "is_configured": true, 00:20:52.048 "data_offset": 256, 00:20:52.048 "data_size": 7936 00:20:52.048 } 00:20:52.048 ] 00:20:52.048 }' 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:52.048 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:52.614 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:52.615 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_name=raid_bdev1 00:20:52.615 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:52.615 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:52.615 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:52.615 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.615 06:34:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.873 06:34:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:52.873 "name": "raid_bdev1", 00:20:52.873 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:52.873 "strip_size_kb": 0, 00:20:52.873 "state": "online", 00:20:52.873 "raid_level": "raid1", 00:20:52.873 "superblock": true, 00:20:52.873 "num_base_bdevs": 2, 00:20:52.873 "num_base_bdevs_discovered": 1, 00:20:52.873 "num_base_bdevs_operational": 1, 00:20:52.873 "base_bdevs_list": [ 00:20:52.873 { 00:20:52.873 "name": null, 00:20:52.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.873 "is_configured": false, 00:20:52.873 "data_offset": 256, 00:20:52.873 "data_size": 7936 00:20:52.873 }, 00:20:52.873 { 00:20:52.873 "name": "BaseBdev2", 00:20:52.873 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:52.873 "is_configured": true, 00:20:52.873 "data_offset": 256, 00:20:52.873 "data_size": 7936 00:20:52.873 } 00:20:52.873 ] 00:20:52.873 }' 00:20:52.873 06:34:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:52.873 06:34:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:52.873 06:34:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:52.873 06:34:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:52.873 06:34:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:53.131 06:34:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:53.388 [2024-07-23 06:34:05.728997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:53.388 [2024-07-23 06:34:05.729058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.388 [2024-07-23 06:34:05.729085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd5b5fc34780 00:20:53.388 [2024-07-23 06:34:05.729094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.389 [2024-07-23 06:34:05.729152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.389 [2024-07-23 06:34:05.729162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:53.389 [2024-07-23 06:34:05.729181] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:53.389 [2024-07-23 06:34:05.729186] 
bdev_raid.c:3656:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:53.389 [2024-07-23 06:34:05.729190] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:53.389 BaseBdev1 00:20:53.389 06:34:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:20:54.320 06:34:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:54.320 06:34:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:54.320 06:34:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:54.320 06:34:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:54.320 06:34:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:54.320 06:34:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:54.320 06:34:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.320 06:34:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.320 06:34:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.320 06:34:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.320 06:34:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.320 06:34:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.578 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.578 "name": "raid_bdev1", 00:20:54.578 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:54.578 "strip_size_kb": 0, 00:20:54.578 "state": "online", 00:20:54.578 "raid_level": "raid1", 00:20:54.578 "superblock": true, 00:20:54.578 "num_base_bdevs": 2, 00:20:54.578 "num_base_bdevs_discovered": 1, 00:20:54.578 "num_base_bdevs_operational": 1, 00:20:54.578 "base_bdevs_list": [ 00:20:54.578 { 00:20:54.578 "name": null, 00:20:54.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.579 "is_configured": false, 00:20:54.579 "data_offset": 256, 00:20:54.579 "data_size": 7936 00:20:54.579 }, 00:20:54.579 { 00:20:54.579 "name": "BaseBdev2", 00:20:54.579 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:54.579 "is_configured": true, 00:20:54.579 "data_offset": 256, 00:20:54.579 "data_size": 7936 00:20:54.579 } 00:20:54.579 ] 00:20:54.579 }' 00:20:54.579 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.579 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:55.145 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:55.145 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:55.145 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:55.145 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:55.145 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:55.145 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.145 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:55.403 "name": "raid_bdev1", 00:20:55.403 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:55.403 "strip_size_kb": 0, 00:20:55.403 "state": "online", 00:20:55.403 "raid_level": "raid1", 00:20:55.403 "superblock": true, 00:20:55.403 "num_base_bdevs": 2, 00:20:55.403 "num_base_bdevs_discovered": 1, 00:20:55.403 "num_base_bdevs_operational": 1, 00:20:55.403 "base_bdevs_list": [ 00:20:55.403 { 00:20:55.403 "name": null, 00:20:55.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.403 "is_configured": false, 00:20:55.403 "data_offset": 256, 00:20:55.403 "data_size": 7936 00:20:55.403 }, 00:20:55.403 { 00:20:55.403 "name": "BaseBdev2", 00:20:55.403 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:55.403 "is_configured": true, 00:20:55.403 "data_offset": 256, 00:20:55.403 "data_size": 7936 00:20:55.403 } 00:20:55.403 ] 00:20:55.403 }' 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" 
in 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:55.403 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:55.661 [2024-07-23 06:34:07.981234] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:55.661 [2024-07-23 06:34:07.981310] bdev_raid.c:3656:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:55.661 [2024-07-23 06:34:07.981316] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:55.661 request: 00:20:55.661 { 00:20:55.661 "base_bdev": "BaseBdev1", 00:20:55.661 "raid_bdev": "raid_bdev1", 00:20:55.661 "method": "bdev_raid_add_base_bdev", 00:20:55.661 "req_id": 1 00:20:55.661 } 00:20:55.661 Got JSON-RPC error response 00:20:55.661 response: 00:20:55.661 { 00:20:55.661 "code": -22, 00:20:55.661 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:55.661 } 00:20:55.661 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:20:55.661 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:55.661 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:55.661 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:55.661 06:34:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:20:56.595 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:56.595 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:56.595 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:56.595 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:56.595 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:56.595 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:56.595 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:56.595 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:56.595 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:56.595 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:56.595 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.595 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:57.161 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:57.161 "name": "raid_bdev1", 00:20:57.161 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:57.161 "strip_size_kb": 0, 00:20:57.161 "state": "online", 00:20:57.161 "raid_level": "raid1", 00:20:57.161 "superblock": true, 00:20:57.161 "num_base_bdevs": 2, 00:20:57.161 "num_base_bdevs_discovered": 1, 00:20:57.161 "num_base_bdevs_operational": 1, 00:20:57.161 "base_bdevs_list": [ 00:20:57.161 { 00:20:57.161 "name": null, 00:20:57.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.161 "is_configured": false, 00:20:57.161 "data_offset": 256, 00:20:57.161 "data_size": 7936 00:20:57.161 }, 00:20:57.161 { 00:20:57.161 "name": "BaseBdev2", 00:20:57.161 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:57.161 "is_configured": true, 00:20:57.161 "data_offset": 256, 00:20:57.161 "data_size": 7936 00:20:57.161 } 00:20:57.161 ] 00:20:57.161 }' 00:20:57.161 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:57.161 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:57.420 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:57.420 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:57.420 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:57.420 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:57.420 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:57.420 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.420 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.678 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:57.678 "name": "raid_bdev1", 00:20:57.678 "uuid": "83c76687-48bd-11ef-a06c-59ddad71024c", 00:20:57.678 "strip_size_kb": 0, 00:20:57.678 "state": "online", 00:20:57.678 "raid_level": "raid1", 00:20:57.678 "superblock": true, 00:20:57.678 "num_base_bdevs": 2, 00:20:57.678 "num_base_bdevs_discovered": 1, 00:20:57.678 "num_base_bdevs_operational": 1, 00:20:57.678 "base_bdevs_list": [ 00:20:57.678 { 00:20:57.678 "name": null, 00:20:57.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.678 "is_configured": false, 00:20:57.678 "data_offset": 256, 00:20:57.678 "data_size": 7936 00:20:57.678 }, 00:20:57.678 { 00:20:57.678 "name": "BaseBdev2", 00:20:57.678 "uuid": "ed3175d9-13ea-c459-9228-7d28015d93f0", 00:20:57.678 "is_configured": true, 00:20:57.678 "data_offset": 256, 00:20:57.678 "data_size": 7936 00:20:57.678 } 00:20:57.678 ] 00:20:57.678 }' 00:20:57.678 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:57.678 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:57.678 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:20:57.678 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:57.679 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 67613 00:20:57.679 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 67613 ']' 00:20:57.679 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 67613 00:20:57.679 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:20:57.679 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:20:57.679 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 67613 00:20:57.679 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:20:57.679 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:20:57.679 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:20:57.679 killing process with pid 67613 00:20:57.679 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67613' 00:20:57.679 Received shutdown signal, test time was about 60.000000 seconds 00:20:57.679 00:20:57.679 Latency(us) 00:20:57.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.679 =================================================================================================================== 00:20:57.679 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:57.679 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 67613 00:20:57.679 [2024-07-23 06:34:09.987231] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:57.679 [2024-07-23 06:34:09.987263] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:57.679 [2024-07-23 06:34:09.987275] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:57.679 [2024-07-23 06:34:09.987280] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd5b5fc35680 name raid_bdev1, state offline 00:20:57.679 06:34:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 67613 00:20:57.679 [2024-07-23 06:34:10.005340] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:57.679 06:34:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:20:57.679 00:20:57.679 real 0m26.326s 00:20:57.679 user 0m40.866s 00:20:57.679 sys 0m2.396s 00:20:57.679 06:34:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:57.679 06:34:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:57.679 ************************************ 00:20:57.679 END TEST raid_rebuild_test_sb_md_interleaved 00:20:57.679 ************************************ 00:20:57.937 06:34:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:57.937 06:34:10 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:20:57.937 06:34:10 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:20:57.937 06:34:10 bdev_raid -- bdev/bdev_raid.sh@58 -- 
# '[' -n 67613 ']' 00:20:57.937 06:34:10 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 67613 00:20:57.937 06:34:10 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:20:57.937 00:20:57.937 real 11m56.871s 00:20:57.937 user 20m54.882s 00:20:57.937 sys 1m47.877s 00:20:57.937 06:34:10 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:57.937 06:34:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:57.937 ************************************ 00:20:57.937 END TEST bdev_raid 00:20:57.937 ************************************ 00:20:57.937 06:34:10 -- common/autotest_common.sh@1142 -- # return 0 00:20:57.937 06:34:10 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:20:57.937 06:34:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:57.937 06:34:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:57.937 06:34:10 -- common/autotest_common.sh@10 -- # set +x 00:20:57.937 ************************************ 00:20:57.937 START TEST bdevperf_config 00:20:57.937 ************************************ 00:20:57.937 06:34:10 bdevperf_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:20:57.937 * Looking for test storage... 00:20:57.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:20:57.937 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:20:57.937 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:20:57.937 06:34:10 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:20:58.196 06:34:10 bdevperf_config -- 
bdevperf/test_config.sh@19 -- # create_job job1 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:20:58.196 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:20:58.196 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:20:58.196 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:20:58.196 06:34:10 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:21:01.478 06:34:13 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-23 06:34:10.481662] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:01.478 [2024-07-23 06:34:10.481883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:01.478 Using job config with 4 jobs 00:21:01.478 EAL: TSC is not safe to use in SMP mode 00:21:01.478 EAL: TSC is not invariant 00:21:01.478 [2024-07-23 06:34:11.018910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.478 [2024-07-23 06:34:11.109350] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:01.478 [2024-07-23 06:34:11.111620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.478 cpumask for '\''job0'\'' is too big 00:21:01.478 cpumask for '\''job1'\'' is too big 00:21:01.478 cpumask for '\''job2'\'' is too big 00:21:01.478 cpumask for '\''job3'\'' is too big 00:21:01.478 Running I/O for 2 seconds... 
00:21:01.478 00:21:01.478 Latency(us) 00:21:01.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.478 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:01.478 Malloc0 : 2.00 309915.59 302.65 0.00 0.00 825.76 229.94 1660.75 00:21:01.478 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:01.478 Malloc0 : 2.00 309904.44 302.64 0.00 0.00 825.58 204.80 1452.22 00:21:01.478 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:01.478 Malloc0 : 2.00 309959.56 302.69 0.00 0.00 825.22 192.70 1541.59 00:21:01.479 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:01.479 Malloc0 : 2.00 309942.38 302.68 0.00 0.00 825.08 171.29 1571.38 00:21:01.479 =================================================================================================================== 00:21:01.479 Total : 1239721.97 1210.67 0.00 0.00 825.41 171.29 1660.75' 00:21:01.479 06:34:13 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-23 06:34:10.481662] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:01.479 [2024-07-23 06:34:10.481883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:01.479 Using job config with 4 jobs 00:21:01.479 EAL: TSC is not safe to use in SMP mode 00:21:01.479 EAL: TSC is not invariant 00:21:01.479 [2024-07-23 06:34:11.018910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.479 [2024-07-23 06:34:11.109350] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:01.479 [2024-07-23 06:34:11.111620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.479 cpumask for '\''job0'\'' is too big 00:21:01.479 cpumask for '\''job1'\'' is too big 00:21:01.479 cpumask for '\''job2'\'' is too big 00:21:01.479 cpumask for '\''job3'\'' is too big 00:21:01.479 Running I/O for 2 seconds... 00:21:01.479 00:21:01.479 Latency(us) 00:21:01.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.479 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:01.479 Malloc0 : 2.00 309915.59 302.65 0.00 0.00 825.76 229.94 1660.75 00:21:01.479 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:01.479 Malloc0 : 2.00 309904.44 302.64 0.00 0.00 825.58 204.80 1452.22 00:21:01.479 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:01.479 Malloc0 : 2.00 309959.56 302.69 0.00 0.00 825.22 192.70 1541.59 00:21:01.479 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:01.479 Malloc0 : 2.00 309942.38 302.68 0.00 0.00 825.08 171.29 1571.38 00:21:01.479 =================================================================================================================== 00:21:01.479 Total : 1239721.97 1210.67 0.00 0.00 825.41 171.29 1660.75' 00:21:01.479 06:34:13 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-23 06:34:10.481662] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:01.479 [2024-07-23 06:34:10.481883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:01.479 Using job config with 4 jobs 00:21:01.479 EAL: TSC is not safe to use in SMP mode 00:21:01.479 EAL: TSC is not invariant 00:21:01.479 [2024-07-23 06:34:11.018910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.479 [2024-07-23 06:34:11.109350] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:01.479 [2024-07-23 06:34:11.111620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.479 cpumask for '\''job0'\'' is too big 00:21:01.479 cpumask for '\''job1'\'' is too big 00:21:01.479 cpumask for '\''job2'\'' is too big 00:21:01.479 cpumask for '\''job3'\'' is too big 00:21:01.479 Running I/O for 2 seconds... 00:21:01.479 00:21:01.479 Latency(us) 00:21:01.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.479 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:01.479 Malloc0 : 2.00 309915.59 302.65 0.00 0.00 825.76 229.94 1660.75 00:21:01.479 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:01.479 Malloc0 : 2.00 309904.44 302.64 0.00 0.00 825.58 204.80 1452.22 00:21:01.479 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:01.479 Malloc0 : 2.00 309959.56 302.69 0.00 0.00 825.22 192.70 1541.59 00:21:01.479 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:01.479 Malloc0 : 2.00 309942.38 302.68 0.00 0.00 825.08 171.29 1571.38 00:21:01.479 =================================================================================================================== 00:21:01.479 Total : 1239721.97 1210.67 0.00 0.00 825.41 171.29 1660.75' 00:21:01.479 06:34:13 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:21:01.479 06:34:13 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:21:01.479 06:34:13 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:21:01.479 06:34:13 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:21:01.479 [2024-07-23 06:34:13.367689] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:01.479 [2024-07-23 06:34:13.367865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:01.479 EAL: TSC is not safe to use in SMP mode 00:21:01.479 EAL: TSC is not invariant 00:21:01.479 [2024-07-23 06:34:13.898095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.479 [2024-07-23 06:34:13.991425] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:01.479 [2024-07-23 06:34:13.993664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.737 cpumask for 'job0' is too big 00:21:01.737 cpumask for 'job1' is too big 00:21:01.737 cpumask for 'job2' is too big 00:21:01.737 cpumask for 'job3' is too big 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:21:04.269 Running I/O for 2 seconds... 
00:21:04.269 00:21:04.269 Latency(us) 00:21:04.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.269 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:04.269 Malloc0 : 2.00 327641.23 319.96 0.00 0.00 781.10 218.76 1727.77 00:21:04.269 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:04.269 Malloc0 : 2.00 327667.80 319.99 0.00 0.00 780.85 200.15 1690.54 00:21:04.269 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:04.269 Malloc0 : 2.00 327648.00 319.97 0.00 0.00 780.68 218.76 1660.75 00:21:04.269 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:21:04.269 Malloc0 : 2.00 327721.37 320.04 0.00 0.00 780.33 93.09 1645.85 00:21:04.269 =================================================================================================================== 00:21:04.269 Total : 1310678.40 1279.96 0.00 0.00 780.74 93.09 1727.77' 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:21:04.269 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:21:04.269 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:21:04.269 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:21:04.269 06:34:16 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
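The write-workload case above is driven by an INI-style job file that common.sh's create_job assembles one section at a time (section header, then rw= and filename= lines appended to test.conf). The generated file itself is never echoed in this log, so the following is only a sketch of an equivalent file and invocation, using the same paths and the Malloc0 bdev defined in conf.json:

# Illustrative only; mirrors the create_job job0/job1/job2 write Malloc0 calls traced above.
cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf << 'EOF'
[job0]
rw=write
filename=Malloc0

[job1]
rw=write
filename=Malloc0

[job2]
rw=write
filename=Malloc0
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json \
  -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf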
00:21:06.801 06:34:19 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-23 06:34:16.261128] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:06.801 [2024-07-23 06:34:16.261398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:06.801 Using job config with 3 jobs 00:21:06.801 EAL: TSC is not safe to use in SMP mode 00:21:06.801 EAL: TSC is not invariant 00:21:06.801 [2024-07-23 06:34:16.820783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.801 [2024-07-23 06:34:16.911860] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:06.801 [2024-07-23 06:34:16.914195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.801 cpumask for '\''job0'\'' is too big 00:21:06.801 cpumask for '\''job1'\'' is too big 00:21:06.801 cpumask for '\''job2'\'' is too big 00:21:06.801 Running I/O for 2 seconds... 00:21:06.801 00:21:06.801 Latency(us) 00:21:06.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.801 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:21:06.801 Malloc0 : 2.00 419502.25 409.67 0.00 0.00 610.01 231.80 1087.30 00:21:06.801 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:21:06.801 Malloc0 : 2.00 419487.57 409.66 0.00 0.00 609.89 193.63 871.33 00:21:06.801 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:21:06.801 Malloc0 : 2.00 419555.96 409.72 0.00 0.00 609.66 60.51 863.89 00:21:06.801 =================================================================================================================== 00:21:06.801 Total : 1258545.78 1229.05 0.00 0.00 609.86 60.51 1087.30' 00:21:06.801 06:34:19 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-23 06:34:16.261128] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:06.801 [2024-07-23 06:34:16.261398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:06.801 Using job config with 3 jobs 00:21:06.801 EAL: TSC is not safe to use in SMP mode 00:21:06.801 EAL: TSC is not invariant 00:21:06.801 [2024-07-23 06:34:16.820783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.801 [2024-07-23 06:34:16.911860] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:06.801 [2024-07-23 06:34:16.914195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.801 cpumask for '\''job0'\'' is too big 00:21:06.801 cpumask for '\''job1'\'' is too big 00:21:06.801 cpumask for '\''job2'\'' is too big 00:21:06.801 Running I/O for 2 seconds... 
00:21:06.801 00:21:06.801 Latency(us) 00:21:06.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.801 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:21:06.801 Malloc0 : 2.00 419502.25 409.67 0.00 0.00 610.01 231.80 1087.30 00:21:06.801 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:21:06.801 Malloc0 : 2.00 419487.57 409.66 0.00 0.00 609.89 193.63 871.33 00:21:06.801 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:21:06.801 Malloc0 : 2.00 419555.96 409.72 0.00 0.00 609.66 60.51 863.89 00:21:06.801 =================================================================================================================== 00:21:06.801 Total : 1258545.78 1229.05 0.00 0.00 609.86 60.51 1087.30' 00:21:06.801 06:34:19 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-23 06:34:16.261128] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:06.801 [2024-07-23 06:34:16.261398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:06.801 Using job config with 3 jobs 00:21:06.801 EAL: TSC is not safe to use in SMP mode 00:21:06.801 EAL: TSC is not invariant 00:21:06.801 [2024-07-23 06:34:16.820783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.801 [2024-07-23 06:34:16.911860] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:06.801 [2024-07-23 06:34:16.914195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.801 cpumask for '\''job0'\'' is too big 00:21:06.801 cpumask for '\''job1'\'' is too big 00:21:06.801 cpumask for '\''job2'\'' is too big 00:21:06.801 Running I/O for 2 seconds... 
00:21:06.801 00:21:06.801 Latency(us) 00:21:06.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.801 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:21:06.801 Malloc0 : 2.00 419502.25 409.67 0.00 0.00 610.01 231.80 1087.30 00:21:06.802 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:21:06.802 Malloc0 : 2.00 419487.57 409.66 0.00 0.00 609.89 193.63 871.33 00:21:06.802 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:21:06.802 Malloc0 : 2.00 419555.96 409.72 0.00 0.00 609.66 60.51 863.89 00:21:06.802 =================================================================================================================== 00:21:06.802 Total : 1258545.78 1229.05 0.00 0.00 609.86 60.51 1087.30' 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:21:06.802 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:21:06.802 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:21:06.802 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:21:06.802 
06:34:19 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:21:06.802 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:21:06.802 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:21:06.802 06:34:19 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:21:10.090 06:34:22 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-23 06:34:19.179051] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:10.090 [2024-07-23 06:34:19.179233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:10.090 Using job config with 4 jobs 00:21:10.090 EAL: TSC is not safe to use in SMP mode 00:21:10.090 EAL: TSC is not invariant 00:21:10.090 [2024-07-23 06:34:19.718468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.090 [2024-07-23 06:34:19.803885] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:10.090 [2024-07-23 06:34:19.806169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.090 cpumask for '\''job0'\'' is too big 00:21:10.090 cpumask for '\''job1'\'' is too big 00:21:10.090 cpumask for '\''job2'\'' is too big 00:21:10.090 cpumask for '\''job3'\'' is too big 00:21:10.090 Running I/O for 2 seconds... 
00:21:10.090 00:21:10.090 Latency(us) 00:21:10.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.091 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc0 : 2.00 152740.13 149.16 0.00 0.00 1675.65 463.59 2964.02 00:21:10.091 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc1 : 2.00 152732.43 149.15 0.00 0.00 1675.54 411.46 2964.02 00:21:10.091 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc0 : 2.00 152724.04 149.14 0.00 0.00 1675.14 446.84 2502.29 00:21:10.091 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc1 : 2.00 152715.50 149.14 0.00 0.00 1675.01 379.81 2517.18 00:21:10.091 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc0 : 2.00 152707.35 149.13 0.00 0.00 1674.51 463.59 2025.66 00:21:10.091 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc1 : 2.00 152699.19 149.12 0.00 0.00 1674.42 418.91 1995.87 00:21:10.091 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc0 : 2.00 152787.92 149.21 0.00 0.00 1672.93 264.38 1966.09 00:21:10.091 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc1 : 2.00 152779.39 149.20 0.00 0.00 1672.82 179.67 1966.09 00:21:10.091 =================================================================================================================== 00:21:10.091 Total : 1221885.95 1193.25 0.00 0.00 1674.50 179.67 2964.02' 00:21:10.091 06:34:22 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-23 06:34:19.179051] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:10.091 [2024-07-23 06:34:19.179233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:10.091 Using job config with 4 jobs 00:21:10.091 EAL: TSC is not safe to use in SMP mode 00:21:10.091 EAL: TSC is not invariant 00:21:10.091 [2024-07-23 06:34:19.718468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.091 [2024-07-23 06:34:19.803885] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:10.091 [2024-07-23 06:34:19.806169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.091 cpumask for '\''job0'\'' is too big 00:21:10.091 cpumask for '\''job1'\'' is too big 00:21:10.091 cpumask for '\''job2'\'' is too big 00:21:10.091 cpumask for '\''job3'\'' is too big 00:21:10.091 Running I/O for 2 seconds... 
00:21:10.091 00:21:10.091 Latency(us) 00:21:10.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.091 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc0 : 2.00 152740.13 149.16 0.00 0.00 1675.65 463.59 2964.02 00:21:10.091 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc1 : 2.00 152732.43 149.15 0.00 0.00 1675.54 411.46 2964.02 00:21:10.091 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc0 : 2.00 152724.04 149.14 0.00 0.00 1675.14 446.84 2502.29 00:21:10.091 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc1 : 2.00 152715.50 149.14 0.00 0.00 1675.01 379.81 2517.18 00:21:10.091 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc0 : 2.00 152707.35 149.13 0.00 0.00 1674.51 463.59 2025.66 00:21:10.091 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc1 : 2.00 152699.19 149.12 0.00 0.00 1674.42 418.91 1995.87 00:21:10.091 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc0 : 2.00 152787.92 149.21 0.00 0.00 1672.93 264.38 1966.09 00:21:10.091 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc1 : 2.00 152779.39 149.20 0.00 0.00 1672.82 179.67 1966.09 00:21:10.091 =================================================================================================================== 00:21:10.091 Total : 1221885.95 1193.25 0.00 0.00 1674.50 179.67 2964.02' 00:21:10.091 06:34:22 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-23 06:34:19.179051] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:10.091 [2024-07-23 06:34:19.179233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:10.091 Using job config with 4 jobs 00:21:10.091 EAL: TSC is not safe to use in SMP mode 00:21:10.091 EAL: TSC is not invariant 00:21:10.091 [2024-07-23 06:34:19.718468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.091 [2024-07-23 06:34:19.803885] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:10.091 [2024-07-23 06:34:19.806169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.091 cpumask for '\''job0'\'' is too big 00:21:10.091 cpumask for '\''job1'\'' is too big 00:21:10.091 cpumask for '\''job2'\'' is too big 00:21:10.091 cpumask for '\''job3'\'' is too big 00:21:10.091 Running I/O for 2 seconds... 
00:21:10.091 00:21:10.091 Latency(us) 00:21:10.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.091 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc0 : 2.00 152740.13 149.16 0.00 0.00 1675.65 463.59 2964.02 00:21:10.091 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc1 : 2.00 152732.43 149.15 0.00 0.00 1675.54 411.46 2964.02 00:21:10.091 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc0 : 2.00 152724.04 149.14 0.00 0.00 1675.14 446.84 2502.29 00:21:10.091 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc1 : 2.00 152715.50 149.14 0.00 0.00 1675.01 379.81 2517.18 00:21:10.091 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc0 : 2.00 152707.35 149.13 0.00 0.00 1674.51 463.59 2025.66 00:21:10.091 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc1 : 2.00 152699.19 149.12 0.00 0.00 1674.42 418.91 1995.87 00:21:10.091 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc0 : 2.00 152787.92 149.21 0.00 0.00 1672.93 264.38 1966.09 00:21:10.091 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:21:10.091 Malloc1 : 2.00 152779.39 149.20 0.00 0.00 1672.82 179.67 1966.09 00:21:10.091 =================================================================================================================== 00:21:10.091 Total : 1221885.95 1193.25 0.00 0.00 1674.50 179.67 2964.02' 00:21:10.091 06:34:22 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:21:10.091 06:34:22 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:21:10.091 06:34:22 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:21:10.091 06:34:22 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:21:10.091 06:34:22 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:21:10.091 06:34:22 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:10.091 00:21:10.091 real 0m11.777s 00:21:10.091 user 0m9.366s 00:21:10.091 sys 0m2.387s 00:21:10.091 06:34:22 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:10.091 06:34:22 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:21:10.091 ************************************ 00:21:10.091 END TEST bdevperf_config 00:21:10.091 ************************************ 00:21:10.091 06:34:22 -- common/autotest_common.sh@1142 -- # return 0 00:21:10.091 06:34:22 -- spdk/autotest.sh@192 -- # uname -s 00:21:10.091 06:34:22 -- spdk/autotest.sh@192 -- # [[ FreeBSD == Linux ]] 00:21:10.091 06:34:22 -- spdk/autotest.sh@198 -- # uname -s 00:21:10.091 06:34:22 -- spdk/autotest.sh@198 -- # [[ FreeBSD == Linux ]] 00:21:10.091 06:34:22 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:21:10.091 06:34:22 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:21:10.091 06:34:22 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:10.091 06:34:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.091 06:34:22 -- common/autotest_common.sh@10 -- # set +x 00:21:10.091 
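Each bdevperf_config case above is validated by parsing the captured output rather than the exit code: get_num_jobs pulls the 'Using job config with N jobs' notice out of the bdevperf log with the two grep calls traced above, and the test compares N against the expected job count. A standalone sketch of that check, assuming the output has already been captured into a shell variable as in the traces:

# Sketch of the get_num_jobs-style check; bdevperf_output holds the captured bdevperf log.
num_jobs=$(echo "$bdevperf_output" \
  | grep -oE 'Using job config with [0-9]+ jobs' \
  | grep -oE '[0-9]+')
[[ $num_jobs == 4 ]] || echo "unexpected job count: $num_jobs"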
************************************ 00:21:10.091 START TEST blockdev_nvme 00:21:10.091 ************************************ 00:21:10.091 06:34:22 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:21:10.091 * Looking for test storage... 00:21:10.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:10.091 06:34:22 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:10.091 06:34:22 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:21:10.091 06:34:22 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:10.091 06:34:22 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:10.091 06:34:22 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:10.091 06:34:22 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:10.091 06:34:22 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:21:10.091 06:34:22 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:10.091 06:34:22 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:21:10.091 06:34:22 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:21:10.091 06:34:22 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:21:10.091 06:34:22 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:21:10.091 06:34:22 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' FreeBSD = Linux ']' 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@678 -- # PRE_RESERVED_MEM=2048 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=68353 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 68353 00:21:10.092 06:34:22 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:10.092 06:34:22 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 68353 ']' 00:21:10.092 06:34:22 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.092 06:34:22 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.092 06:34:22 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
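blockdev.sh starts a bare spdk_tgt (pid 68353 in this run) and waitforlisten blocks until the target answers RPCs on /var/tmp/spdk.sock before any bdev configuration is attempted. A rough, simplified equivalent of that start-and-wait step is sketched below; the real waitforlisten in autotest_common.sh has far more retry and error handling, and rpc_get_methods is simply one convenient RPC to poll with:

# Sketch only: start spdk_tgt and poll the RPC socket until it answers.
rpc_sock=/var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
tgt_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; do
  kill -0 "$tgt_pid" 2> /dev/null || { echo "spdk_tgt exited before listening"; exit 1; }
  sleep 0.5
done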
00:21:10.092 06:34:22 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.092 06:34:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:10.092 [2024-07-23 06:34:22.275973] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:10.092 [2024-07-23 06:34:22.276224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:10.351 EAL: TSC is not safe to use in SMP mode 00:21:10.351 EAL: TSC is not invariant 00:21:10.351 [2024-07-23 06:34:22.828358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.610 [2024-07-23 06:34:22.912587] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:10.610 [2024-07-23 06:34:22.914784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.868 06:34:23 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:10.868 06:34:23 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:21:10.868 06:34:23 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:21:10.868 06:34:23 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:21:10.868 06:34:23 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:21:10.868 06:34:23 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:21:10.868 06:34:23 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 [2024-07-23 06:34:23.434500] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.126 06:34:23 blockdev_nvme -- 
common/autotest_common.sh@10 -- # set +x 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "999adeb7-48bd-11ef-a06c-59ddad71024c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "999adeb7-48bd-11ef-a06c-59ddad71024c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:21:11.126 06:34:23 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 68353 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 68353 ']' 00:21:11.126 06:34:23 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 68353 00:21:11.127 06:34:23 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:21:11.127 06:34:23 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:21:11.127 06:34:23 blockdev_nvme -- common/autotest_common.sh@956 -- # ps -c -o command 68353 00:21:11.127 06:34:23 blockdev_nvme -- common/autotest_common.sh@956 -- # tail -1 00:21:11.127 06:34:23 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:21:11.127 06:34:23 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:21:11.127 killing process with pid 68353 
00:21:11.127 06:34:23 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68353' 00:21:11.127 06:34:23 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 68353 00:21:11.127 06:34:23 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 68353 00:21:11.385 06:34:23 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:11.385 06:34:23 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:21:11.385 06:34:23 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:21:11.385 06:34:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:11.385 06:34:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:11.385 ************************************ 00:21:11.385 START TEST bdev_hello_world 00:21:11.385 ************************************ 00:21:11.385 06:34:23 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:21:11.385 [2024-07-23 06:34:23.878932] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:11.385 [2024-07-23 06:34:23.879191] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:11.953 EAL: TSC is not safe to use in SMP mode 00:21:11.953 EAL: TSC is not invariant 00:21:11.953 [2024-07-23 06:34:24.432325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.211 [2024-07-23 06:34:24.516387] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:12.211 [2024-07-23 06:34:24.518557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.211 [2024-07-23 06:34:24.580173] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:12.211 [2024-07-23 06:34:24.653539] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:12.211 [2024-07-23 06:34:24.653623] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:21:12.211 [2024-07-23 06:34:24.653659] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:12.211 [2024-07-23 06:34:24.654612] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:12.211 [2024-07-23 06:34:24.655076] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:12.211 [2024-07-23 06:34:24.655117] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:12.212 [2024-07-23 06:34:24.655276] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
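hello_bdev above is pointed at the NVMe bdev through '--json .../test/bdev/bdev.json -b Nvme0n1'; the attach-controller fragment that gen_nvme.sh fed to spdk_tgt earlier in this section shows the shape of that configuration. The sketch below writes a minimal standalone config of the same shape and runs the example against it; the real bdev.json used here is assembled by blockdev.sh from save_subsystem_config and carries additional subsystems, so treat the file name and contents as illustrative:

# Minimal sketch of a bdev JSON config in the style of the gen_nvme.sh output above.
cat > /tmp/bdev_example.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
  --json /tmp/bdev_example.json -b Nvme0n1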
00:21:12.212 00:21:12.212 [2024-07-23 06:34:24.655302] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:12.477 00:21:12.477 real 0m0.973s 00:21:12.477 user 0m0.361s 00:21:12.477 sys 0m0.610s 00:21:12.477 06:34:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:12.477 ************************************ 00:21:12.477 END TEST bdev_hello_world 00:21:12.477 ************************************ 00:21:12.477 06:34:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:12.477 06:34:24 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:12.477 06:34:24 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:21:12.477 06:34:24 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:12.477 06:34:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:12.477 06:34:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:12.477 ************************************ 00:21:12.477 START TEST bdev_bounds 00:21:12.477 ************************************ 00:21:12.477 06:34:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:21:12.477 06:34:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=68424 00:21:12.477 06:34:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:12.477 Process bdevio pid: 68424 00:21:12.477 06:34:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 68424' 00:21:12.477 06:34:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:12.477 06:34:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 68424 00:21:12.477 06:34:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 68424 ']' 00:21:12.477 06:34:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.477 06:34:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.477 06:34:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.477 06:34:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.477 06:34:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:12.477 [2024-07-23 06:34:24.903467] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:12.477 [2024-07-23 06:34:24.903686] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:13.052 EAL: TSC is not safe to use in SMP mode 00:21:13.052 EAL: TSC is not invariant 00:21:13.052 [2024-07-23 06:34:25.452730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:13.052 [2024-07-23 06:34:25.549584] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:13.052 [2024-07-23 06:34:25.549645] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:21:13.052 [2024-07-23 06:34:25.549667] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:21:13.052 [2024-07-23 06:34:25.553607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.052 [2024-07-23 06:34:25.553762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.052 [2024-07-23 06:34:25.553755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.310 [2024-07-23 06:34:25.613826] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:13.568 06:34:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:13.568 06:34:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:21:13.568 06:34:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:13.827 I/O targets: 00:21:13.827 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:21:13.827 00:21:13.827 00:21:13.827 CUnit - A unit testing framework for C - Version 2.1-3 00:21:13.827 http://cunit.sourceforge.net/ 00:21:13.827 00:21:13.827 00:21:13.827 Suite: bdevio tests on: Nvme0n1 00:21:13.827 Test: blockdev write read block ...passed 00:21:13.827 Test: blockdev write zeroes read block ...passed 00:21:13.827 Test: blockdev write zeroes read no split ...passed 00:21:13.827 Test: blockdev write zeroes read split ...passed 00:21:13.827 Test: blockdev write zeroes read split partial ...passed 00:21:13.827 Test: blockdev reset ...[2024-07-23 06:34:26.104179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:21:13.827 [2024-07-23 06:34:26.105524] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:13.827 passed 00:21:13.827 Test: blockdev write read 8 blocks ...passed 00:21:13.827 Test: blockdev write read size > 128k ...passed 00:21:13.827 Test: blockdev write read invalid size ...passed 00:21:13.827 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:13.827 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:13.827 Test: blockdev write read max offset ...passed 00:21:13.827 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:13.827 Test: blockdev writev readv 8 blocks ...passed 00:21:13.827 Test: blockdev writev readv 30 x 1block ...passed 00:21:13.827 Test: blockdev writev readv block ...passed 00:21:13.827 Test: blockdev writev readv size > 128k ...passed 00:21:13.827 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:13.827 Test: blockdev comparev and writev ...[2024-07-23 06:34:26.110182] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x147945000 len:0x1000 00:21:13.827 [2024-07-23 06:34:26.110224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:13.827 passed 00:21:13.827 Test: blockdev nvme passthru rw ...passed 00:21:13.827 Test: blockdev nvme passthru vendor specific ...[2024-07-23 06:34:26.110968] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:21:13.828 [2024-07-23 06:34:26.111009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:21:13.828 passed 00:21:13.828 Test: blockdev nvme admin passthru ...passed 00:21:13.828 Test: blockdev copy ...passed 00:21:13.828 00:21:13.828 Run Summary: Type Total Ran Passed Failed Inactive 00:21:13.828 suites 1 1 n/a 0 0 00:21:13.828 tests 23 23 23 0 0 00:21:13.828 asserts 152 152 152 0 n/a 00:21:13.828 00:21:13.828 Elapsed time = 0.031 seconds 00:21:13.828 0 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 68424 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 68424 ']' 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 68424 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps -c -o command 68424 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # tail -1 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=bdevio 00:21:13.828 killing process with pid 68424 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' bdevio = sudo ']' 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68424' 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 68424 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 68424 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:13.828 00:21:13.828 real 0m1.434s 00:21:13.828 user 0m2.749s 00:21:13.828 sys 0m0.650s 00:21:13.828 
************************************ 00:21:13.828 END TEST bdev_bounds 00:21:13.828 ************************************ 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:13.828 06:34:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:14.086 06:34:26 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:14.086 06:34:26 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:21:14.086 06:34:26 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:14.086 06:34:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:14.086 06:34:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:14.086 ************************************ 00:21:14.086 START TEST bdev_nbd 00:21:14.086 ************************************ 00:21:14.086 06:34:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:21:14.086 06:34:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:14.086 06:34:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ FreeBSD == Linux ]] 00:21:14.086 06:34:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # return 0 00:21:14.086 00:21:14.086 real 0m0.005s 00:21:14.086 user 0m0.003s 00:21:14.086 sys 0m0.001s 00:21:14.086 06:34:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:14.086 ************************************ 00:21:14.086 END TEST bdev_nbd 00:21:14.086 ************************************ 00:21:14.086 06:34:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:14.086 06:34:26 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:14.086 06:34:26 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:21:14.086 06:34:26 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:21:14.086 skipping fio tests on NVMe due to multi-ns failures. 00:21:14.086 06:34:26 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:21:14.086 06:34:26 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:14.086 06:34:26 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:14.086 06:34:26 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:21:14.086 06:34:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:14.086 06:34:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:14.086 ************************************ 00:21:14.086 START TEST bdev_verify 00:21:14.086 ************************************ 00:21:14.086 06:34:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:14.087 [2024-07-23 06:34:26.442039] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
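bdev_verify and bdev_verify_big_io reuse bdevperf itself as the data-integrity checker; only the I/O size differs between the two. The invocation traced above, spelled out as a standalone command with the flags annotated (a sketch; -C is passed through exactly as blockdev.sh does):

# Flags as used by the bdev_verify case above.
#   -q 128     queue depth per job
#   -o 4096    I/O size in bytes (65536 in the bdev_verify_big_io case that follows)
#   -w verify  verify workload
#   -t 5       run time in seconds
#   -m 0x3     core mask: two reactors, matching the two jobs in the results below
#   -C         passed through by blockdev.sh as shown in the trace
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3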
00:21:14.087 [2024-07-23 06:34:26.442288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:14.653 EAL: TSC is not safe to use in SMP mode 00:21:14.653 EAL: TSC is not invariant 00:21:14.653 [2024-07-23 06:34:27.001882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:14.653 [2024-07-23 06:34:27.120626] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:14.653 [2024-07-23 06:34:27.120742] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:21:14.653 [2024-07-23 06:34:27.124992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.653 [2024-07-23 06:34:27.124981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.916 [2024-07-23 06:34:27.186279] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:14.916 Running I/O for 5 seconds... 00:21:20.191 00:21:20.191 Latency(us) 00:21:20.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.191 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:20.191 Verification LBA range: start 0x0 length 0xa0000 00:21:20.191 Nvme0n1 : 5.00 20441.03 79.85 0.00 0.00 6252.55 592.06 11856.09 00:21:20.191 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:20.191 Verification LBA range: start 0xa0000 length 0xa0000 00:21:20.191 Nvme0n1 : 5.00 21984.87 85.88 0.00 0.00 5813.74 374.23 10247.47 00:21:20.191 =================================================================================================================== 00:21:20.191 Total : 42425.91 165.73 0.00 0.00 6025.15 374.23 11856.09 00:21:20.450 00:21:20.450 real 0m6.426s 00:21:20.450 user 0m11.410s 00:21:20.450 sys 0m0.608s 00:21:20.450 ************************************ 00:21:20.450 END TEST bdev_verify 00:21:20.450 ************************************ 00:21:20.450 06:34:32 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:20.450 06:34:32 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:20.450 06:34:32 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:20.450 06:34:32 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:20.451 06:34:32 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:21:20.451 06:34:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:20.451 06:34:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:20.451 ************************************ 00:21:20.451 START TEST bdev_verify_big_io 00:21:20.451 ************************************ 00:21:20.451 06:34:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:20.451 [2024-07-23 06:34:32.916185] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:20.451 [2024-07-23 06:34:32.916424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:21.043 EAL: TSC is not safe to use in SMP mode 00:21:21.043 EAL: TSC is not invariant 00:21:21.043 [2024-07-23 06:34:33.468561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:21.043 [2024-07-23 06:34:33.554077] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:21.043 [2024-07-23 06:34:33.554138] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:21:21.043 [2024-07-23 06:34:33.557017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.043 [2024-07-23 06:34:33.557006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.300 [2024-07-23 06:34:33.614923] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:21.300 Running I/O for 5 seconds... 00:21:26.555 00:21:26.555 Latency(us) 00:21:26.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.555 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:26.555 Verification LBA range: start 0x0 length 0xa000 00:21:26.555 Nvme0n1 : 5.01 8583.96 536.50 0.00 0.00 14832.18 377.95 28835.91 00:21:26.555 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:26.555 Verification LBA range: start 0xa000 length 0xa000 00:21:26.555 Nvme0n1 : 5.01 8619.04 538.69 0.00 0.00 14769.11 644.19 23831.33 00:21:26.555 =================================================================================================================== 00:21:26.555 Total : 17203.00 1075.19 0.00 0.00 14800.58 377.95 28835.91 00:21:29.842 00:21:29.842 real 0m9.166s 00:21:29.842 user 0m16.960s 00:21:29.842 sys 0m0.587s 00:21:29.842 06:34:42 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:29.842 06:34:42 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:29.842 ************************************ 00:21:29.842 END TEST bdev_verify_big_io 00:21:29.842 ************************************ 00:21:29.842 06:34:42 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:29.842 06:34:42 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:29.842 06:34:42 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:21:29.842 06:34:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:29.842 06:34:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:29.842 ************************************ 00:21:29.842 START TEST bdev_write_zeroes 00:21:29.842 ************************************ 00:21:29.842 06:34:42 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:29.842 [2024-07-23 06:34:42.127908] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:29.842 [2024-07-23 06:34:42.128087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:30.408 EAL: TSC is not safe to use in SMP mode 00:21:30.408 EAL: TSC is not invariant 00:21:30.408 [2024-07-23 06:34:42.650229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.408 [2024-07-23 06:34:42.737639] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:30.408 [2024-07-23 06:34:42.739891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.408 [2024-07-23 06:34:42.797823] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:30.408 Running I/O for 1 seconds... 00:21:31.778 00:21:31.778 Latency(us) 00:21:31.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.779 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:31.779 Nvme0n1 : 1.00 72341.19 282.58 0.00 0.00 1767.82 539.93 16920.25 00:21:31.779 =================================================================================================================== 00:21:31.779 Total : 72341.19 282.58 0.00 0.00 1767.82 539.93 16920.25 00:21:31.779 00:21:31.779 real 0m1.945s 00:21:31.779 user 0m1.371s 00:21:31.779 sys 0m0.567s 00:21:31.779 06:34:44 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:31.779 06:34:44 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:31.779 ************************************ 00:21:31.779 END TEST bdev_write_zeroes 00:21:31.779 ************************************ 00:21:31.779 06:34:44 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:31.779 06:34:44 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:31.779 06:34:44 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:21:31.779 06:34:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:31.779 06:34:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:31.779 ************************************ 00:21:31.779 START TEST bdev_json_nonenclosed 00:21:31.779 ************************************ 00:21:31.779 06:34:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:31.779 [2024-07-23 06:34:44.113712] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:31.779 [2024-07-23 06:34:44.113974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:32.346 EAL: TSC is not safe to use in SMP mode 00:21:32.346 EAL: TSC is not invariant 00:21:32.346 [2024-07-23 06:34:44.642404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.346 [2024-07-23 06:34:44.730812] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:21:32.346 [2024-07-23 06:34:44.733145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.346 [2024-07-23 06:34:44.733216] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:32.346 [2024-07-23 06:34:44.733227] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:32.346 [2024-07-23 06:34:44.733235] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:32.346 00:21:32.346 real 0m0.741s 00:21:32.346 user 0m0.179s 00:21:32.346 sys 0m0.559s 00:21:32.346 06:34:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:21:32.346 06:34:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:32.346 06:34:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:32.346 ************************************ 00:21:32.346 END TEST bdev_json_nonenclosed 00:21:32.346 ************************************ 00:21:32.605 06:34:44 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:21:32.605 06:34:44 blockdev_nvme -- bdev/blockdev.sh@781 -- # true 00:21:32.605 06:34:44 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:32.605 06:34:44 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:21:32.605 06:34:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:32.605 06:34:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:32.605 ************************************ 00:21:32.605 START TEST bdev_json_nonarray 00:21:32.605 ************************************ 00:21:32.605 06:34:44 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:32.605 [2024-07-23 06:34:44.903248] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:32.605 [2024-07-23 06:34:44.903546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:33.172 EAL: TSC is not safe to use in SMP mode 00:21:33.172 EAL: TSC is not invariant 00:21:33.172 [2024-07-23 06:34:45.426968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.172 [2024-07-23 06:34:45.516991] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:33.172 [2024-07-23 06:34:45.519333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.172 [2024-07-23 06:34:45.519419] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:33.172 [2024-07-23 06:34:45.519434] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:33.172 [2024-07-23 06:34:45.519442] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:33.172 00:21:33.172 real 0m0.741s 00:21:33.172 user 0m0.179s 00:21:33.172 sys 0m0.559s 00:21:33.172 06:34:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:21:33.173 06:34:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:33.173 06:34:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:33.173 ************************************ 00:21:33.173 END TEST bdev_json_nonarray 00:21:33.173 ************************************ 00:21:33.173 06:34:45 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:21:33.173 06:34:45 blockdev_nvme -- bdev/blockdev.sh@784 -- # true 00:21:33.173 06:34:45 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:21:33.173 06:34:45 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:21:33.173 06:34:45 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:21:33.173 06:34:45 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:21:33.173 06:34:45 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:21:33.173 06:34:45 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:33.173 06:34:45 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:33.173 06:34:45 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:21:33.173 06:34:45 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:21:33.173 06:34:45 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:21:33.173 06:34:45 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:21:33.173 00:21:33.173 real 0m23.568s 00:21:33.173 user 0m35.013s 00:21:33.173 sys 0m5.130s 00:21:33.173 06:34:45 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:33.173 06:34:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:33.173 ************************************ 00:21:33.173 END TEST blockdev_nvme 00:21:33.173 ************************************ 00:21:33.431 06:34:45 -- common/autotest_common.sh@1142 -- # return 0 00:21:33.431 06:34:45 -- spdk/autotest.sh@213 -- # uname -s 00:21:33.431 06:34:45 -- spdk/autotest.sh@213 -- # [[ FreeBSD == Linux ]] 00:21:33.431 06:34:45 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:21:33.431 06:34:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:33.431 06:34:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.431 06:34:45 -- common/autotest_common.sh@10 -- # set +x 00:21:33.431 ************************************ 00:21:33.431 START TEST nvme 00:21:33.431 ************************************ 00:21:33.431 06:34:45 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:21:33.431 * Looking for test storage... 
00:21:33.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:33.431 06:34:45 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:33.691 hw.nic_uio.bdfs="0:16:0" 00:21:33.691 06:34:46 nvme -- nvme/nvme.sh@79 -- # uname 00:21:33.691 06:34:46 nvme -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:21:33.691 06:34:46 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:21:33.691 06:34:46 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:21:33.691 06:34:46 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.691 06:34:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:33.691 ************************************ 00:21:33.691 START TEST nvme_reset 00:21:33.691 ************************************ 00:21:33.691 06:34:46 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:21:34.258 EAL: TSC is not safe to use in SMP mode 00:21:34.258 EAL: TSC is not invariant 00:21:34.258 [2024-07-23 06:34:46.651059] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:34.258 Initializing NVMe Controllers 00:21:34.258 Skipping QEMU NVMe SSD at 0000:00:10.0 00:21:34.258 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:21:34.258 00:21:34.258 real 0m0.610s 00:21:34.258 user 0m0.006s 00:21:34.258 sys 0m0.604s 00:21:34.258 06:34:46 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:34.258 06:34:46 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:21:34.258 ************************************ 00:21:34.258 END TEST nvme_reset 00:21:34.258 ************************************ 00:21:34.258 06:34:46 nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:34.258 06:34:46 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:21:34.258 06:34:46 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:34.258 06:34:46 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.258 06:34:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:34.258 ************************************ 00:21:34.258 START TEST nvme_identify 00:21:34.258 ************************************ 00:21:34.258 06:34:46 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:21:34.258 06:34:46 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:21:34.258 06:34:46 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:21:34.258 06:34:46 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:21:34.258 06:34:46 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:21:34.258 06:34:46 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:21:34.258 06:34:46 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:21:34.258 06:34:46 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:34.258 06:34:46 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:34.258 06:34:46 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:21:34.517 06:34:46 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:21:34.517 06:34:46 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 
0000:00:10.0 00:21:34.517 06:34:46 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:21:35.111 EAL: TSC is not safe to use in SMP mode 00:21:35.111 EAL: TSC is not invariant 00:21:35.111 [2024-07-23 06:34:47.374841] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:35.111 ===================================================== 00:21:35.111 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:35.111 ===================================================== 00:21:35.111 Controller Capabilities/Features 00:21:35.111 ================================ 00:21:35.111 Vendor ID: 1b36 00:21:35.111 Subsystem Vendor ID: 1af4 00:21:35.111 Serial Number: 12340 00:21:35.111 Model Number: QEMU NVMe Ctrl 00:21:35.111 Firmware Version: 8.0.0 00:21:35.111 Recommended Arb Burst: 6 00:21:35.111 IEEE OUI Identifier: 00 54 52 00:21:35.111 Multi-path I/O 00:21:35.111 May have multiple subsystem ports: No 00:21:35.111 May have multiple controllers: No 00:21:35.111 Associated with SR-IOV VF: No 00:21:35.111 Max Data Transfer Size: 524288 00:21:35.111 Max Number of Namespaces: 256 00:21:35.111 Max Number of I/O Queues: 64 00:21:35.111 NVMe Specification Version (VS): 1.4 00:21:35.111 NVMe Specification Version (Identify): 1.4 00:21:35.111 Maximum Queue Entries: 2048 00:21:35.111 Contiguous Queues Required: Yes 00:21:35.111 Arbitration Mechanisms Supported 00:21:35.111 Weighted Round Robin: Not Supported 00:21:35.111 Vendor Specific: Not Supported 00:21:35.111 Reset Timeout: 7500 ms 00:21:35.111 Doorbell Stride: 4 bytes 00:21:35.111 NVM Subsystem Reset: Not Supported 00:21:35.111 Command Sets Supported 00:21:35.111 NVM Command Set: Supported 00:21:35.111 Boot Partition: Not Supported 00:21:35.111 Memory Page Size Minimum: 4096 bytes 00:21:35.111 Memory Page Size Maximum: 65536 bytes 00:21:35.111 Persistent Memory Region: Not Supported 00:21:35.111 Optional Asynchronous Events Supported 00:21:35.111 Namespace Attribute Notices: Supported 00:21:35.111 Firmware Activation Notices: Not Supported 00:21:35.111 ANA Change Notices: Not Supported 00:21:35.111 PLE Aggregate Log Change Notices: Not Supported 00:21:35.111 LBA Status Info Alert Notices: Not Supported 00:21:35.111 EGE Aggregate Log Change Notices: Not Supported 00:21:35.111 Normal NVM Subsystem Shutdown event: Not Supported 00:21:35.111 Zone Descriptor Change Notices: Not Supported 00:21:35.111 Discovery Log Change Notices: Not Supported 00:21:35.111 Controller Attributes 00:21:35.111 128-bit Host Identifier: Not Supported 00:21:35.111 Non-Operational Permissive Mode: Not Supported 00:21:35.111 NVM Sets: Not Supported 00:21:35.111 Read Recovery Levels: Not Supported 00:21:35.111 Endurance Groups: Not Supported 00:21:35.111 Predictable Latency Mode: Not Supported 00:21:35.111 Traffic Based Keep ALive: Not Supported 00:21:35.111 Namespace Granularity: Not Supported 00:21:35.111 SQ Associations: Not Supported 00:21:35.111 UUID List: Not Supported 00:21:35.111 Multi-Domain Subsystem: Not Supported 00:21:35.111 Fixed Capacity Management: Not Supported 00:21:35.111 Variable Capacity Management: Not Supported 00:21:35.111 Delete Endurance Group: Not Supported 00:21:35.111 Delete NVM Set: Not Supported 00:21:35.111 Extended LBA Formats Supported: Supported 00:21:35.111 Flexible Data Placement Supported: Not Supported 00:21:35.111 00:21:35.111 Controller Memory Buffer Support 00:21:35.111 ================================ 00:21:35.111 Supported: No 00:21:35.111 00:21:35.111 
Persistent Memory Region Support 00:21:35.111 ================================ 00:21:35.111 Supported: No 00:21:35.111 00:21:35.111 Admin Command Set Attributes 00:21:35.111 ============================ 00:21:35.111 Security Send/Receive: Not Supported 00:21:35.111 Format NVM: Supported 00:21:35.111 Firmware Activate/Download: Not Supported 00:21:35.111 Namespace Management: Supported 00:21:35.111 Device Self-Test: Not Supported 00:21:35.111 Directives: Supported 00:21:35.111 NVMe-MI: Not Supported 00:21:35.111 Virtualization Management: Not Supported 00:21:35.111 Doorbell Buffer Config: Supported 00:21:35.111 Get LBA Status Capability: Not Supported 00:21:35.111 Command & Feature Lockdown Capability: Not Supported 00:21:35.111 Abort Command Limit: 4 00:21:35.111 Async Event Request Limit: 4 00:21:35.111 Number of Firmware Slots: N/A 00:21:35.111 Firmware Slot 1 Read-Only: N/A 00:21:35.111 Firmware Activation Without Reset: N/A 00:21:35.111 Multiple Update Detection Support: N/A 00:21:35.111 Firmware Update Granularity: No Information Provided 00:21:35.111 Per-Namespace SMART Log: Yes 00:21:35.111 Asymmetric Namespace Access Log Page: Not Supported 00:21:35.111 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:21:35.111 Command Effects Log Page: Supported 00:21:35.111 Get Log Page Extended Data: Supported 00:21:35.111 Telemetry Log Pages: Not Supported 00:21:35.111 Persistent Event Log Pages: Not Supported 00:21:35.111 Supported Log Pages Log Page: May Support 00:21:35.111 Commands Supported & Effects Log Page: Not Supported 00:21:35.111 Feature Identifiers & Effects Log Page:May Support 00:21:35.111 NVMe-MI Commands & Effects Log Page: May Support 00:21:35.111 Data Area 4 for Telemetry Log: Not Supported 00:21:35.111 Error Log Page Entries Supported: 1 00:21:35.111 Keep Alive: Not Supported 00:21:35.111 00:21:35.111 NVM Command Set Attributes 00:21:35.111 ========================== 00:21:35.111 Submission Queue Entry Size 00:21:35.111 Max: 64 00:21:35.111 Min: 64 00:21:35.111 Completion Queue Entry Size 00:21:35.111 Max: 16 00:21:35.111 Min: 16 00:21:35.111 Number of Namespaces: 256 00:21:35.111 Compare Command: Supported 00:21:35.111 Write Uncorrectable Command: Not Supported 00:21:35.111 Dataset Management Command: Supported 00:21:35.111 Write Zeroes Command: Supported 00:21:35.111 Set Features Save Field: Supported 00:21:35.111 Reservations: Not Supported 00:21:35.111 Timestamp: Supported 00:21:35.111 Copy: Supported 00:21:35.111 Volatile Write Cache: Present 00:21:35.111 Atomic Write Unit (Normal): 1 00:21:35.111 Atomic Write Unit (PFail): 1 00:21:35.111 Atomic Compare & Write Unit: 1 00:21:35.111 Fused Compare & Write: Not Supported 00:21:35.111 Scatter-Gather List 00:21:35.111 SGL Command Set: Supported 00:21:35.111 SGL Keyed: Not Supported 00:21:35.111 SGL Bit Bucket Descriptor: Not Supported 00:21:35.111 SGL Metadata Pointer: Not Supported 00:21:35.111 Oversized SGL: Not Supported 00:21:35.111 SGL Metadata Address: Not Supported 00:21:35.111 SGL Offset: Not Supported 00:21:35.111 Transport SGL Data Block: Not Supported 00:21:35.111 Replay Protected Memory Block: Not Supported 00:21:35.111 00:21:35.111 Firmware Slot Information 00:21:35.111 ========================= 00:21:35.111 Active slot: 1 00:21:35.111 Slot 1 Firmware Revision: 1.0 00:21:35.111 00:21:35.111 00:21:35.111 Commands Supported and Effects 00:21:35.111 ============================== 00:21:35.111 Admin Commands 00:21:35.111 -------------- 00:21:35.111 Delete I/O Submission Queue (00h): Supported 00:21:35.111 Create I/O 
Submission Queue (01h): Supported 00:21:35.111 Get Log Page (02h): Supported 00:21:35.111 Delete I/O Completion Queue (04h): Supported 00:21:35.111 Create I/O Completion Queue (05h): Supported 00:21:35.111 Identify (06h): Supported 00:21:35.111 Abort (08h): Supported 00:21:35.111 Set Features (09h): Supported 00:21:35.111 Get Features (0Ah): Supported 00:21:35.111 Asynchronous Event Request (0Ch): Supported 00:21:35.111 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:35.111 Directive Send (19h): Supported 00:21:35.111 Directive Receive (1Ah): Supported 00:21:35.111 Virtualization Management (1Ch): Supported 00:21:35.111 Doorbell Buffer Config (7Ch): Supported 00:21:35.111 Format NVM (80h): Supported LBA-Change 00:21:35.111 I/O Commands 00:21:35.111 ------------ 00:21:35.111 Flush (00h): Supported LBA-Change 00:21:35.111 Write (01h): Supported LBA-Change 00:21:35.111 Read (02h): Supported 00:21:35.111 Compare (05h): Supported 00:21:35.111 Write Zeroes (08h): Supported LBA-Change 00:21:35.111 Dataset Management (09h): Supported LBA-Change 00:21:35.111 Unknown (0Ch): Supported 00:21:35.111 Unknown (12h): Supported 00:21:35.111 Copy (19h): Supported LBA-Change 00:21:35.111 Unknown (1Dh): Supported LBA-Change 00:21:35.111 00:21:35.111 Error Log 00:21:35.111 ========= 00:21:35.111 00:21:35.111 Arbitration 00:21:35.112 =========== 00:21:35.112 Arbitration Burst: no limit 00:21:35.112 00:21:35.112 Power Management 00:21:35.112 ================ 00:21:35.112 Number of Power States: 1 00:21:35.112 Current Power State: Power State #0 00:21:35.112 Power State #0: 00:21:35.112 Max Power: 25.00 W 00:21:35.112 Non-Operational State: Operational 00:21:35.112 Entry Latency: 16 microseconds 00:21:35.112 Exit Latency: 4 microseconds 00:21:35.112 Relative Read Throughput: 0 00:21:35.112 Relative Read Latency: 0 00:21:35.112 Relative Write Throughput: 0 00:21:35.112 Relative Write Latency: 0 00:21:35.112 Idle Power: Not Reported 00:21:35.112 Active Power: Not Reported 00:21:35.112 Non-Operational Permissive Mode: Not Supported 00:21:35.112 00:21:35.112 Health Information 00:21:35.112 ================== 00:21:35.112 Critical Warnings: 00:21:35.112 Available Spare Space: OK 00:21:35.112 Temperature: OK 00:21:35.112 Device Reliability: OK 00:21:35.112 Read Only: No 00:21:35.112 Volatile Memory Backup: OK 00:21:35.112 Current Temperature: 323 Kelvin (50 Celsius) 00:21:35.112 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:35.112 Available Spare: 0% 00:21:35.112 Available Spare Threshold: 0% 00:21:35.112 Life Percentage Used: 0% 00:21:35.112 Data Units Read: 12747 00:21:35.112 Data Units Written: 12731 00:21:35.112 Host Read Commands: 298654 00:21:35.112 Host Write Commands: 298503 00:21:35.112 Controller Busy Time: 0 minutes 00:21:35.112 Power Cycles: 0 00:21:35.112 Power On Hours: 0 hours 00:21:35.112 Unsafe Shutdowns: 0 00:21:35.112 Unrecoverable Media Errors: 0 00:21:35.112 Lifetime Error Log Entries: 0 00:21:35.112 Warning Temperature Time: 0 minutes 00:21:35.112 Critical Temperature Time: 0 minutes 00:21:35.112 00:21:35.112 Number of Queues 00:21:35.112 ================ 00:21:35.112 Number of I/O Submission Queues: 64 00:21:35.112 Number of I/O Completion Queues: 64 00:21:35.112 00:21:35.112 ZNS Specific Controller Data 00:21:35.112 ============================ 00:21:35.112 Zone Append Size Limit: 0 00:21:35.112 00:21:35.112 00:21:35.112 Active Namespaces 00:21:35.112 ================= 00:21:35.112 Namespace ID:1 00:21:35.112 Error Recovery Timeout: Unlimited 00:21:35.112 Command Set 
Identifier: NVM (00h) 00:21:35.112 Deallocate: Supported 00:21:35.112 Deallocated/Unwritten Error: Supported 00:21:35.112 Deallocated Read Value: All 0x00 00:21:35.112 Deallocate in Write Zeroes: Not Supported 00:21:35.112 Deallocated Guard Field: 0xFFFF 00:21:35.112 Flush: Supported 00:21:35.112 Reservation: Not Supported 00:21:35.112 Namespace Sharing Capabilities: Private 00:21:35.112 Size (in LBAs): 1310720 (5GiB) 00:21:35.112 Capacity (in LBAs): 1310720 (5GiB) 00:21:35.112 Utilization (in LBAs): 1310720 (5GiB) 00:21:35.112 Thin Provisioning: Not Supported 00:21:35.112 Per-NS Atomic Units: No 00:21:35.112 Maximum Single Source Range Length: 128 00:21:35.112 Maximum Copy Length: 128 00:21:35.112 Maximum Source Range Count: 128 00:21:35.112 NGUID/EUI64 Never Reused: No 00:21:35.112 Namespace Write Protected: No 00:21:35.112 Number of LBA Formats: 8 00:21:35.112 Current LBA Format: LBA Format #04 00:21:35.112 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:35.112 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:35.112 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:35.112 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:35.112 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:35.112 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:35.112 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:35.112 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:35.112 00:21:35.112 NVM Specific Namespace Data 00:21:35.112 =========================== 00:21:35.112 Logical Block Storage Tag Mask: 0 00:21:35.112 Protection Information Capabilities: 00:21:35.112 16b Guard Protection Information Storage Tag Support: No 00:21:35.112 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:35.112 Storage Tag Check Read Support: No 00:21:35.112 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.112 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.112 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.112 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.112 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.112 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.112 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.112 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.112 06:34:47 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:21:35.112 06:34:47 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:21:35.680 EAL: TSC is not safe to use in SMP mode 00:21:35.680 EAL: TSC is not invariant 00:21:35.680 [2024-07-23 06:34:47.983653] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:35.680 ===================================================== 00:21:35.680 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:35.680 ===================================================== 00:21:35.680 Controller Capabilities/Features 00:21:35.680 ================================ 00:21:35.680 Vendor ID: 1b36 00:21:35.680 Subsystem Vendor ID: 1af4 00:21:35.680 Serial Number: 12340 00:21:35.680 Model Number: QEMU NVMe Ctrl 
00:21:35.680 Firmware Version: 8.0.0 00:21:35.680 Recommended Arb Burst: 6 00:21:35.680 IEEE OUI Identifier: 00 54 52 00:21:35.680 Multi-path I/O 00:21:35.680 May have multiple subsystem ports: No 00:21:35.680 May have multiple controllers: No 00:21:35.680 Associated with SR-IOV VF: No 00:21:35.680 Max Data Transfer Size: 524288 00:21:35.680 Max Number of Namespaces: 256 00:21:35.680 Max Number of I/O Queues: 64 00:21:35.680 NVMe Specification Version (VS): 1.4 00:21:35.680 NVMe Specification Version (Identify): 1.4 00:21:35.680 Maximum Queue Entries: 2048 00:21:35.680 Contiguous Queues Required: Yes 00:21:35.680 Arbitration Mechanisms Supported 00:21:35.680 Weighted Round Robin: Not Supported 00:21:35.680 Vendor Specific: Not Supported 00:21:35.680 Reset Timeout: 7500 ms 00:21:35.680 Doorbell Stride: 4 bytes 00:21:35.680 NVM Subsystem Reset: Not Supported 00:21:35.680 Command Sets Supported 00:21:35.680 NVM Command Set: Supported 00:21:35.680 Boot Partition: Not Supported 00:21:35.680 Memory Page Size Minimum: 4096 bytes 00:21:35.680 Memory Page Size Maximum: 65536 bytes 00:21:35.680 Persistent Memory Region: Not Supported 00:21:35.680 Optional Asynchronous Events Supported 00:21:35.680 Namespace Attribute Notices: Supported 00:21:35.680 Firmware Activation Notices: Not Supported 00:21:35.680 ANA Change Notices: Not Supported 00:21:35.680 PLE Aggregate Log Change Notices: Not Supported 00:21:35.680 LBA Status Info Alert Notices: Not Supported 00:21:35.680 EGE Aggregate Log Change Notices: Not Supported 00:21:35.680 Normal NVM Subsystem Shutdown event: Not Supported 00:21:35.680 Zone Descriptor Change Notices: Not Supported 00:21:35.680 Discovery Log Change Notices: Not Supported 00:21:35.680 Controller Attributes 00:21:35.680 128-bit Host Identifier: Not Supported 00:21:35.680 Non-Operational Permissive Mode: Not Supported 00:21:35.680 NVM Sets: Not Supported 00:21:35.680 Read Recovery Levels: Not Supported 00:21:35.680 Endurance Groups: Not Supported 00:21:35.680 Predictable Latency Mode: Not Supported 00:21:35.680 Traffic Based Keep ALive: Not Supported 00:21:35.680 Namespace Granularity: Not Supported 00:21:35.680 SQ Associations: Not Supported 00:21:35.680 UUID List: Not Supported 00:21:35.680 Multi-Domain Subsystem: Not Supported 00:21:35.680 Fixed Capacity Management: Not Supported 00:21:35.680 Variable Capacity Management: Not Supported 00:21:35.680 Delete Endurance Group: Not Supported 00:21:35.680 Delete NVM Set: Not Supported 00:21:35.680 Extended LBA Formats Supported: Supported 00:21:35.680 Flexible Data Placement Supported: Not Supported 00:21:35.680 00:21:35.680 Controller Memory Buffer Support 00:21:35.680 ================================ 00:21:35.680 Supported: No 00:21:35.680 00:21:35.680 Persistent Memory Region Support 00:21:35.680 ================================ 00:21:35.680 Supported: No 00:21:35.680 00:21:35.680 Admin Command Set Attributes 00:21:35.680 ============================ 00:21:35.680 Security Send/Receive: Not Supported 00:21:35.680 Format NVM: Supported 00:21:35.680 Firmware Activate/Download: Not Supported 00:21:35.680 Namespace Management: Supported 00:21:35.680 Device Self-Test: Not Supported 00:21:35.680 Directives: Supported 00:21:35.680 NVMe-MI: Not Supported 00:21:35.680 Virtualization Management: Not Supported 00:21:35.680 Doorbell Buffer Config: Supported 00:21:35.680 Get LBA Status Capability: Not Supported 00:21:35.680 Command & Feature Lockdown Capability: Not Supported 00:21:35.680 Abort Command Limit: 4 00:21:35.680 Async Event Request 
Limit: 4 00:21:35.680 Number of Firmware Slots: N/A 00:21:35.680 Firmware Slot 1 Read-Only: N/A 00:21:35.680 Firmware Activation Without Reset: N/A 00:21:35.680 Multiple Update Detection Support: N/A 00:21:35.680 Firmware Update Granularity: No Information Provided 00:21:35.680 Per-Namespace SMART Log: Yes 00:21:35.680 Asymmetric Namespace Access Log Page: Not Supported 00:21:35.681 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:21:35.681 Command Effects Log Page: Supported 00:21:35.681 Get Log Page Extended Data: Supported 00:21:35.681 Telemetry Log Pages: Not Supported 00:21:35.681 Persistent Event Log Pages: Not Supported 00:21:35.681 Supported Log Pages Log Page: May Support 00:21:35.681 Commands Supported & Effects Log Page: Not Supported 00:21:35.681 Feature Identifiers & Effects Log Page:May Support 00:21:35.681 NVMe-MI Commands & Effects Log Page: May Support 00:21:35.681 Data Area 4 for Telemetry Log: Not Supported 00:21:35.681 Error Log Page Entries Supported: 1 00:21:35.681 Keep Alive: Not Supported 00:21:35.681 00:21:35.681 NVM Command Set Attributes 00:21:35.681 ========================== 00:21:35.681 Submission Queue Entry Size 00:21:35.681 Max: 64 00:21:35.681 Min: 64 00:21:35.681 Completion Queue Entry Size 00:21:35.681 Max: 16 00:21:35.681 Min: 16 00:21:35.681 Number of Namespaces: 256 00:21:35.681 Compare Command: Supported 00:21:35.681 Write Uncorrectable Command: Not Supported 00:21:35.681 Dataset Management Command: Supported 00:21:35.681 Write Zeroes Command: Supported 00:21:35.681 Set Features Save Field: Supported 00:21:35.681 Reservations: Not Supported 00:21:35.681 Timestamp: Supported 00:21:35.681 Copy: Supported 00:21:35.681 Volatile Write Cache: Present 00:21:35.681 Atomic Write Unit (Normal): 1 00:21:35.681 Atomic Write Unit (PFail): 1 00:21:35.681 Atomic Compare & Write Unit: 1 00:21:35.681 Fused Compare & Write: Not Supported 00:21:35.681 Scatter-Gather List 00:21:35.681 SGL Command Set: Supported 00:21:35.681 SGL Keyed: Not Supported 00:21:35.681 SGL Bit Bucket Descriptor: Not Supported 00:21:35.681 SGL Metadata Pointer: Not Supported 00:21:35.681 Oversized SGL: Not Supported 00:21:35.681 SGL Metadata Address: Not Supported 00:21:35.681 SGL Offset: Not Supported 00:21:35.681 Transport SGL Data Block: Not Supported 00:21:35.681 Replay Protected Memory Block: Not Supported 00:21:35.681 00:21:35.681 Firmware Slot Information 00:21:35.681 ========================= 00:21:35.681 Active slot: 1 00:21:35.681 Slot 1 Firmware Revision: 1.0 00:21:35.681 00:21:35.681 00:21:35.681 Commands Supported and Effects 00:21:35.681 ============================== 00:21:35.681 Admin Commands 00:21:35.681 -------------- 00:21:35.681 Delete I/O Submission Queue (00h): Supported 00:21:35.681 Create I/O Submission Queue (01h): Supported 00:21:35.681 Get Log Page (02h): Supported 00:21:35.681 Delete I/O Completion Queue (04h): Supported 00:21:35.681 Create I/O Completion Queue (05h): Supported 00:21:35.681 Identify (06h): Supported 00:21:35.681 Abort (08h): Supported 00:21:35.681 Set Features (09h): Supported 00:21:35.681 Get Features (0Ah): Supported 00:21:35.681 Asynchronous Event Request (0Ch): Supported 00:21:35.681 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:35.681 Directive Send (19h): Supported 00:21:35.681 Directive Receive (1Ah): Supported 00:21:35.681 Virtualization Management (1Ch): Supported 00:21:35.681 Doorbell Buffer Config (7Ch): Supported 00:21:35.681 Format NVM (80h): Supported LBA-Change 00:21:35.681 I/O Commands 00:21:35.681 ------------ 
00:21:35.681 Flush (00h): Supported LBA-Change 00:21:35.681 Write (01h): Supported LBA-Change 00:21:35.681 Read (02h): Supported 00:21:35.681 Compare (05h): Supported 00:21:35.681 Write Zeroes (08h): Supported LBA-Change 00:21:35.681 Dataset Management (09h): Supported LBA-Change 00:21:35.681 Unknown (0Ch): Supported 00:21:35.681 Unknown (12h): Supported 00:21:35.681 Copy (19h): Supported LBA-Change 00:21:35.681 Unknown (1Dh): Supported LBA-Change 00:21:35.681 00:21:35.681 Error Log 00:21:35.681 ========= 00:21:35.681 00:21:35.681 Arbitration 00:21:35.681 =========== 00:21:35.681 Arbitration Burst: no limit 00:21:35.681 00:21:35.681 Power Management 00:21:35.681 ================ 00:21:35.681 Number of Power States: 1 00:21:35.681 Current Power State: Power State #0 00:21:35.681 Power State #0: 00:21:35.681 Max Power: 25.00 W 00:21:35.681 Non-Operational State: Operational 00:21:35.681 Entry Latency: 16 microseconds 00:21:35.681 Exit Latency: 4 microseconds 00:21:35.681 Relative Read Throughput: 0 00:21:35.681 Relative Read Latency: 0 00:21:35.681 Relative Write Throughput: 0 00:21:35.681 Relative Write Latency: 0 00:21:35.681 Idle Power: Not Reported 00:21:35.681 Active Power: Not Reported 00:21:35.681 Non-Operational Permissive Mode: Not Supported 00:21:35.681 00:21:35.681 Health Information 00:21:35.681 ================== 00:21:35.681 Critical Warnings: 00:21:35.681 Available Spare Space: OK 00:21:35.681 Temperature: OK 00:21:35.681 Device Reliability: OK 00:21:35.681 Read Only: No 00:21:35.681 Volatile Memory Backup: OK 00:21:35.681 Current Temperature: 323 Kelvin (50 Celsius) 00:21:35.681 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:35.681 Available Spare: 0% 00:21:35.681 Available Spare Threshold: 0% 00:21:35.681 Life Percentage Used: 0% 00:21:35.681 Data Units Read: 12747 00:21:35.681 Data Units Written: 12731 00:21:35.681 Host Read Commands: 298654 00:21:35.681 Host Write Commands: 298503 00:21:35.681 Controller Busy Time: 0 minutes 00:21:35.681 Power Cycles: 0 00:21:35.681 Power On Hours: 0 hours 00:21:35.681 Unsafe Shutdowns: 0 00:21:35.681 Unrecoverable Media Errors: 0 00:21:35.681 Lifetime Error Log Entries: 0 00:21:35.681 Warning Temperature Time: 0 minutes 00:21:35.681 Critical Temperature Time: 0 minutes 00:21:35.681 00:21:35.681 Number of Queues 00:21:35.681 ================ 00:21:35.681 Number of I/O Submission Queues: 64 00:21:35.681 Number of I/O Completion Queues: 64 00:21:35.681 00:21:35.681 ZNS Specific Controller Data 00:21:35.681 ============================ 00:21:35.681 Zone Append Size Limit: 0 00:21:35.681 00:21:35.681 00:21:35.681 Active Namespaces 00:21:35.681 ================= 00:21:35.681 Namespace ID:1 00:21:35.681 Error Recovery Timeout: Unlimited 00:21:35.681 Command Set Identifier: NVM (00h) 00:21:35.681 Deallocate: Supported 00:21:35.681 Deallocated/Unwritten Error: Supported 00:21:35.681 Deallocated Read Value: All 0x00 00:21:35.681 Deallocate in Write Zeroes: Not Supported 00:21:35.681 Deallocated Guard Field: 0xFFFF 00:21:35.681 Flush: Supported 00:21:35.681 Reservation: Not Supported 00:21:35.681 Namespace Sharing Capabilities: Private 00:21:35.681 Size (in LBAs): 1310720 (5GiB) 00:21:35.681 Capacity (in LBAs): 1310720 (5GiB) 00:21:35.681 Utilization (in LBAs): 1310720 (5GiB) 00:21:35.681 Thin Provisioning: Not Supported 00:21:35.681 Per-NS Atomic Units: No 00:21:35.681 Maximum Single Source Range Length: 128 00:21:35.681 Maximum Copy Length: 128 00:21:35.681 Maximum Source Range Count: 128 00:21:35.681 NGUID/EUI64 Never Reused: No 
00:21:35.681 Namespace Write Protected: No 00:21:35.681 Number of LBA Formats: 8 00:21:35.681 Current LBA Format: LBA Format #04 00:21:35.681 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:35.681 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:35.681 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:35.681 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:35.681 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:35.681 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:35.681 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:35.681 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:35.681 00:21:35.681 NVM Specific Namespace Data 00:21:35.681 =========================== 00:21:35.681 Logical Block Storage Tag Mask: 0 00:21:35.681 Protection Information Capabilities: 00:21:35.681 16b Guard Protection Information Storage Tag Support: No 00:21:35.681 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:35.681 Storage Tag Check Read Support: No 00:21:35.681 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.681 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.681 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.681 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.681 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.681 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.681 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.681 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:35.681 00:21:35.681 real 0m1.280s 00:21:35.681 user 0m0.056s 00:21:35.681 sys 0m1.240s 00:21:35.681 06:34:48 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:35.681 06:34:48 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:21:35.682 ************************************ 00:21:35.682 END TEST nvme_identify 00:21:35.682 ************************************ 00:21:35.682 06:34:48 nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:35.682 06:34:48 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:21:35.682 06:34:48 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:35.682 06:34:48 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.682 06:34:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:35.682 ************************************ 00:21:35.682 START TEST nvme_perf 00:21:35.682 ************************************ 00:21:35.682 06:34:48 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:21:35.682 06:34:48 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:21:36.248 EAL: TSC is not safe to use in SMP mode 00:21:36.248 EAL: TSC is not invariant 00:21:36.248 [2024-07-23 06:34:48.616960] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:37.184 Initializing NVMe Controllers 00:21:37.184 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:37.184 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:37.184 Initialization complete. Launching workers. 
00:21:37.184 ======================================================== 00:21:37.184 Latency(us) 00:21:37.184 Device Information : IOPS MiB/s Average min max 00:21:37.184 PCIE (0000:00:10.0) NSID 1 from core 0: 83348.17 976.74 1535.60 174.65 3308.67 00:21:37.184 ======================================================== 00:21:37.184 Total : 83348.17 976.74 1535.60 174.65 3308.67 00:21:37.184 00:21:37.184 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:21:37.184 ================================================================================= 00:21:37.184 1.00000% : 1228.803us 00:21:37.184 10.00000% : 1355.407us 00:21:37.184 25.00000% : 1437.327us 00:21:37.184 50.00000% : 1526.695us 00:21:37.184 75.00000% : 1623.510us 00:21:37.184 90.00000% : 1712.877us 00:21:37.184 95.00000% : 1772.455us 00:21:37.184 98.00000% : 1906.507us 00:21:37.184 99.00000% : 2263.977us 00:21:37.184 99.50000% : 2353.344us 00:21:37.184 99.90000% : 2844.865us 00:21:37.184 99.99000% : 3172.546us 00:21:37.184 99.99900% : 3321.492us 00:21:37.184 99.99990% : 3321.492us 00:21:37.184 99.99999% : 3321.492us 00:21:37.184 00:21:37.184 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:21:37.184 ============================================================================== 00:21:37.184 Range in us Cumulative IO count 00:21:37.184 174.080 - 175.011: 0.0024% ( 2) 00:21:37.184 191.768 - 192.699: 0.0048% ( 2) 00:21:37.184 194.560 - 195.491: 0.0060% ( 1) 00:21:37.184 215.041 - 215.971: 0.0084% ( 2) 00:21:37.184 215.971 - 216.902: 0.0096% ( 1) 00:21:37.184 216.902 - 217.833: 0.0132% ( 3) 00:21:37.184 217.833 - 218.764: 0.0156% ( 2) 00:21:37.184 218.764 - 219.695: 0.0168% ( 1) 00:21:37.184 219.695 - 220.626: 0.0192% ( 2) 00:21:37.184 222.488 - 223.419: 0.0216% ( 2) 00:21:37.184 223.419 - 224.350: 0.0228% ( 1) 00:21:37.184 224.350 - 225.281: 0.0240% ( 1) 00:21:37.184 226.211 - 227.142: 0.0252% ( 1) 00:21:37.184 253.208 - 255.070: 0.0276% ( 2) 00:21:37.184 256.932 - 258.793: 0.0300% ( 2) 00:21:37.184 258.793 - 260.655: 0.0312% ( 1) 00:21:37.184 260.655 - 262.517: 0.0360% ( 4) 00:21:37.184 262.517 - 264.379: 0.0384% ( 2) 00:21:37.184 264.379 - 266.241: 0.0396% ( 1) 00:21:37.184 322.095 - 323.957: 0.0420% ( 2) 00:21:37.184 325.819 - 327.681: 0.0432% ( 1) 00:21:37.184 327.681 - 329.543: 0.0444% ( 1) 00:21:37.184 329.543 - 331.404: 0.0468% ( 2) 00:21:37.184 331.404 - 333.266: 0.0492% ( 2) 00:21:37.184 333.266 - 335.128: 0.0504% ( 1) 00:21:37.184 335.128 - 336.990: 0.0516% ( 1) 00:21:37.184 336.990 - 338.852: 0.0528% ( 1) 00:21:37.184 338.852 - 340.714: 0.0540% ( 1) 00:21:37.184 348.161 - 350.023: 0.0552% ( 1) 00:21:37.184 350.023 - 351.885: 0.0564% ( 1) 00:21:37.184 351.885 - 353.746: 0.0576% ( 1) 00:21:37.184 856.439 - 860.162: 0.0588% ( 1) 00:21:37.184 860.162 - 863.886: 0.0624% ( 3) 00:21:37.184 1072.410 - 1079.857: 0.0636% ( 1) 00:21:37.184 1079.857 - 1087.305: 0.0696% ( 5) 00:21:37.184 1087.305 - 1094.752: 0.0852% ( 13) 00:21:37.184 1094.752 - 1102.199: 0.0936% ( 7) 00:21:37.184 1102.199 - 1109.646: 0.1080% ( 12) 00:21:37.184 1109.646 - 1117.094: 0.1260% ( 15) 00:21:37.184 1117.094 - 1124.541: 0.1404% ( 12) 00:21:37.184 1124.541 - 1131.988: 0.1632% ( 19) 00:21:37.184 1131.988 - 1139.436: 0.1812% ( 15) 00:21:37.184 1139.436 - 1146.883: 0.2016% ( 17) 00:21:37.184 1146.883 - 1154.330: 0.2351% ( 28) 00:21:37.184 1154.330 - 1161.778: 0.2735% ( 32) 00:21:37.184 1161.778 - 1169.225: 0.3239% ( 42) 00:21:37.184 1169.225 - 1176.672: 0.3803% ( 47) 00:21:37.184 1176.672 - 1184.119: 0.4307% ( 42) 00:21:37.184 1184.119 - 
1191.567: 0.4847% ( 45) 00:21:37.184 1191.567 - 1199.014: 0.5591% ( 62) 00:21:37.184 1199.014 - 1206.461: 0.6502% ( 76) 00:21:37.184 1206.461 - 1213.909: 0.7630% ( 94) 00:21:37.184 1213.909 - 1221.356: 0.8710% ( 90) 00:21:37.184 1221.356 - 1228.803: 1.0209% ( 125) 00:21:37.184 1228.803 - 1236.250: 1.1901% ( 141) 00:21:37.184 1236.250 - 1243.698: 1.3929% ( 169) 00:21:37.184 1243.698 - 1251.145: 1.5860% ( 161) 00:21:37.184 1251.145 - 1258.592: 1.8571% ( 226) 00:21:37.184 1258.592 - 1266.040: 2.1727% ( 263) 00:21:37.184 1266.040 - 1273.487: 2.5662% ( 328) 00:21:37.184 1273.487 - 1280.934: 3.0053% ( 366) 00:21:37.184 1280.934 - 1288.381: 3.5211% ( 430) 00:21:37.184 1288.381 - 1295.829: 4.0646% ( 453) 00:21:37.184 1295.829 - 1303.276: 4.6584% ( 495) 00:21:37.184 1303.276 - 1310.723: 5.3411% ( 569) 00:21:37.184 1310.723 - 1318.171: 6.0573% ( 597) 00:21:37.184 1318.171 - 1325.618: 6.8347% ( 648) 00:21:37.184 1325.618 - 1333.065: 7.6949% ( 717) 00:21:37.184 1333.065 - 1340.513: 8.5731% ( 732) 00:21:37.184 1340.513 - 1347.960: 9.5352% ( 802) 00:21:37.184 1347.960 - 1355.407: 10.5958% ( 884) 00:21:37.184 1355.407 - 1362.854: 11.7103% ( 929) 00:21:37.184 1362.854 - 1370.302: 12.9172% ( 1006) 00:21:37.184 1370.302 - 1377.749: 14.1529% ( 1030) 00:21:37.184 1377.749 - 1385.196: 15.4810% ( 1107) 00:21:37.184 1385.196 - 1392.644: 16.8702% ( 1158) 00:21:37.184 1392.644 - 1400.091: 18.2691% ( 1166) 00:21:37.184 1400.091 - 1407.538: 19.8143% ( 1288) 00:21:37.184 1407.538 - 1414.985: 21.4051% ( 1326) 00:21:37.184 1414.985 - 1422.433: 23.0511% ( 1372) 00:21:37.184 1422.433 - 1429.880: 24.7403% ( 1408) 00:21:37.184 1429.880 - 1437.327: 26.5218% ( 1485) 00:21:37.184 1437.327 - 1444.775: 28.3742% ( 1544) 00:21:37.184 1444.775 - 1452.222: 30.2517% ( 1565) 00:21:37.184 1452.222 - 1459.669: 32.2372% ( 1655) 00:21:37.184 1459.669 - 1467.116: 34.2203% ( 1653) 00:21:37.184 1467.116 - 1474.564: 36.2346% ( 1679) 00:21:37.184 1474.564 - 1482.011: 38.2837% ( 1708) 00:21:37.184 1482.011 - 1489.458: 40.3328% ( 1708) 00:21:37.184 1489.458 - 1496.906: 42.4659% ( 1778) 00:21:37.184 1496.906 - 1504.353: 44.5821% ( 1764) 00:21:37.184 1504.353 - 1511.800: 46.7428% ( 1801) 00:21:37.184 1511.800 - 1519.248: 48.9383% ( 1830) 00:21:37.184 1519.248 - 1526.695: 51.0185% ( 1734) 00:21:37.184 1526.695 - 1534.142: 53.1036% ( 1738) 00:21:37.184 1534.142 - 1541.589: 55.1251% ( 1685) 00:21:37.184 1541.589 - 1549.037: 57.1646% ( 1700) 00:21:37.184 1549.037 - 1556.484: 59.1993% ( 1696) 00:21:37.184 1556.484 - 1563.931: 61.1944% ( 1663) 00:21:37.184 1563.931 - 1571.379: 63.1427% ( 1624) 00:21:37.184 1571.379 - 1578.826: 65.0419% ( 1583) 00:21:37.184 1578.826 - 1586.273: 66.8990% ( 1548) 00:21:37.184 1586.273 - 1593.720: 68.7190% ( 1517) 00:21:37.184 1593.720 - 1601.168: 70.5401% ( 1518) 00:21:37.184 1601.168 - 1608.615: 72.3061% ( 1472) 00:21:37.184 1608.615 - 1616.062: 73.9557% ( 1375) 00:21:37.184 1616.062 - 1623.510: 75.5357% ( 1317) 00:21:37.184 1623.510 - 1630.957: 77.1481% ( 1344) 00:21:37.184 1630.957 - 1638.404: 78.6813% ( 1278) 00:21:37.184 1638.404 - 1645.851: 80.1665% ( 1238) 00:21:37.184 1645.851 - 1653.299: 81.6062% ( 1200) 00:21:37.185 1653.299 - 1660.746: 82.9522% ( 1122) 00:21:37.185 1660.746 - 1668.193: 84.2155% ( 1053) 00:21:37.185 1668.193 - 1675.641: 85.4152% ( 1000) 00:21:37.185 1675.641 - 1683.088: 86.5645% ( 958) 00:21:37.185 1683.088 - 1690.535: 87.6263% ( 885) 00:21:37.185 1690.535 - 1697.983: 88.5980% ( 810) 00:21:37.185 1697.983 - 1705.430: 89.5146% ( 764) 00:21:37.185 1705.430 - 1712.877: 90.3412% ( 689) 00:21:37.185 
1712.877 - 1720.324: 91.1282% ( 656) 00:21:37.185 1720.324 - 1727.772: 91.8780% ( 625) 00:21:37.185 1727.772 - 1735.219: 92.5738% ( 580) 00:21:37.185 1735.219 - 1742.666: 93.2541% ( 567) 00:21:37.185 1742.666 - 1750.114: 93.8563% ( 502) 00:21:37.185 1750.114 - 1757.561: 94.3794% ( 436) 00:21:37.185 1757.561 - 1765.008: 94.8485% ( 391) 00:21:37.185 1765.008 - 1772.455: 95.2648% ( 347) 00:21:37.185 1772.455 - 1779.903: 95.6451% ( 317) 00:21:37.185 1779.903 - 1787.350: 95.9798% ( 279) 00:21:37.185 1787.350 - 1794.797: 96.2737% ( 245) 00:21:37.185 1794.797 - 1802.245: 96.5377% ( 220) 00:21:37.185 1802.245 - 1809.692: 96.7476% ( 175) 00:21:37.185 1809.692 - 1817.139: 96.9144% ( 139) 00:21:37.185 1817.139 - 1824.586: 97.0703% ( 130) 00:21:37.185 1824.586 - 1832.034: 97.2275% ( 131) 00:21:37.185 1832.034 - 1839.481: 97.3655% ( 115) 00:21:37.185 1839.481 - 1846.928: 97.4842% ( 99) 00:21:37.185 1846.928 - 1854.376: 97.5622% ( 65) 00:21:37.185 1854.376 - 1861.823: 97.6570% ( 79) 00:21:37.185 1861.823 - 1869.270: 97.7314% ( 62) 00:21:37.185 1869.270 - 1876.718: 97.7925% ( 51) 00:21:37.185 1876.718 - 1884.165: 97.8573% ( 54) 00:21:37.185 1884.165 - 1891.612: 97.9125% ( 46) 00:21:37.185 1891.612 - 1899.059: 97.9689% ( 47) 00:21:37.185 1899.059 - 1906.507: 98.0157% ( 39) 00:21:37.185 1906.507 - 1921.401: 98.1057% ( 75) 00:21:37.185 1921.401 - 1936.296: 98.1597% ( 45) 00:21:37.185 1936.296 - 1951.190: 98.2112% ( 43) 00:21:37.185 1951.190 - 1966.085: 98.2604% ( 41) 00:21:37.185 1966.085 - 1980.980: 98.2916% ( 26) 00:21:37.185 1980.980 - 1995.874: 98.3300% ( 32) 00:21:37.185 1995.874 - 2010.769: 98.3660% ( 30) 00:21:37.185 2010.769 - 2025.663: 98.3864% ( 17) 00:21:37.185 2025.663 - 2040.558: 98.4164% ( 25) 00:21:37.185 2040.558 - 2055.453: 98.4380% ( 18) 00:21:37.185 2055.453 - 2070.347: 98.4512% ( 11) 00:21:37.185 2070.347 - 2085.242: 98.4788% ( 23) 00:21:37.185 2085.242 - 2100.136: 98.5040% ( 21) 00:21:37.185 2100.136 - 2115.031: 98.5244% ( 17) 00:21:37.185 2115.031 - 2129.925: 98.5568% ( 27) 00:21:37.185 2129.925 - 2144.820: 98.5951% ( 32) 00:21:37.185 2144.820 - 2159.715: 98.6227% ( 23) 00:21:37.185 2159.715 - 2174.609: 98.6563% ( 28) 00:21:37.185 2174.609 - 2189.504: 98.6863% ( 25) 00:21:37.185 2189.504 - 2204.398: 98.7403% ( 45) 00:21:37.185 2204.398 - 2219.293: 98.8063% ( 55) 00:21:37.185 2219.293 - 2234.188: 98.8771% ( 59) 00:21:37.185 2234.188 - 2249.082: 98.9683% ( 76) 00:21:37.185 2249.082 - 2263.977: 99.0546% ( 72) 00:21:37.185 2263.977 - 2278.871: 99.1422% ( 73) 00:21:37.185 2278.871 - 2293.766: 99.2202% ( 65) 00:21:37.185 2293.766 - 2308.660: 99.2958% ( 63) 00:21:37.185 2308.660 - 2323.555: 99.3810% ( 71) 00:21:37.185 2323.555 - 2338.450: 99.4577% ( 64) 00:21:37.185 2338.450 - 2353.344: 99.5261% ( 57) 00:21:37.185 2353.344 - 2368.239: 99.5765% ( 42) 00:21:37.185 2368.239 - 2383.133: 99.6257% ( 41) 00:21:37.185 2383.133 - 2398.028: 99.6605% ( 29) 00:21:37.185 2398.028 - 2412.923: 99.6989% ( 32) 00:21:37.185 2412.923 - 2427.817: 99.7253% ( 22) 00:21:37.185 2427.817 - 2442.712: 99.7481% ( 19) 00:21:37.185 2442.712 - 2457.606: 99.7613% ( 11) 00:21:37.185 2457.606 - 2472.501: 99.7805% ( 16) 00:21:37.185 2472.501 - 2487.395: 99.7949% ( 12) 00:21:37.185 2487.395 - 2502.290: 99.8068% ( 10) 00:21:37.185 2502.290 - 2517.185: 99.8188% ( 10) 00:21:37.185 2517.185 - 2532.079: 99.8296% ( 9) 00:21:37.185 2532.079 - 2546.974: 99.8428% ( 11) 00:21:37.185 2546.974 - 2561.868: 99.8524% ( 8) 00:21:37.185 2561.868 - 2576.763: 99.8572% ( 4) 00:21:37.185 2576.763 - 2591.658: 99.8584% ( 1) 00:21:37.185 2695.920 - 
2710.814: 99.8620% ( 3) 00:21:37.185 2710.814 - 2725.709: 99.8656% ( 3) 00:21:37.185 2725.709 - 2740.603: 99.8692% ( 3) 00:21:37.185 2740.603 - 2755.498: 99.8800% ( 9) 00:21:37.185 2755.498 - 2770.393: 99.8848% ( 4) 00:21:37.185 2770.393 - 2785.287: 99.8884% ( 3) 00:21:37.185 2785.287 - 2800.182: 99.8908% ( 2) 00:21:37.185 2800.182 - 2815.076: 99.8956% ( 4) 00:21:37.185 2815.076 - 2829.971: 99.8968% ( 1) 00:21:37.185 2829.971 - 2844.865: 99.9004% ( 3) 00:21:37.185 2844.865 - 2859.760: 99.9040% ( 3) 00:21:37.185 2859.760 - 2874.655: 99.9088% ( 4) 00:21:37.185 2874.655 - 2889.549: 99.9124% ( 3) 00:21:37.185 2889.549 - 2904.444: 99.9172% ( 4) 00:21:37.185 2904.444 - 2919.338: 99.9208% ( 3) 00:21:37.185 2919.338 - 2934.233: 99.9244% ( 3) 00:21:37.185 2934.233 - 2949.128: 99.9268% ( 2) 00:21:37.185 2949.128 - 2964.022: 99.9316% ( 4) 00:21:37.185 2964.022 - 2978.917: 99.9352% ( 3) 00:21:37.185 2978.917 - 2993.811: 99.9400% ( 4) 00:21:37.185 2993.811 - 3008.706: 99.9436% ( 3) 00:21:37.185 3008.706 - 3023.600: 99.9472% ( 3) 00:21:37.185 3023.600 - 3038.495: 99.9508% ( 3) 00:21:37.185 3038.495 - 3053.390: 99.9568% ( 5) 00:21:37.185 3053.390 - 3068.284: 99.9616% ( 4) 00:21:37.185 3068.284 - 3083.179: 99.9664% ( 4) 00:21:37.185 3083.179 - 3098.073: 99.9700% ( 3) 00:21:37.185 3098.073 - 3112.968: 99.9760% ( 5) 00:21:37.185 3112.968 - 3127.863: 99.9808% ( 4) 00:21:37.185 3127.863 - 3142.757: 99.9856% ( 4) 00:21:37.185 3142.757 - 3157.652: 99.9892% ( 3) 00:21:37.185 3157.652 - 3172.546: 99.9928% ( 3) 00:21:37.185 3172.546 - 3187.441: 99.9952% ( 2) 00:21:37.185 3187.441 - 3202.335: 99.9964% ( 1) 00:21:37.185 3202.335 - 3217.230: 99.9988% ( 2) 00:21:37.185 3306.598 - 3321.492: 100.0000% ( 1) 00:21:37.185 00:21:37.185 06:34:49 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:21:37.793 EAL: TSC is not safe to use in SMP mode 00:21:37.793 EAL: TSC is not invariant 00:21:37.793 [2024-07-23 06:34:50.255765] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:39.167 Initializing NVMe Controllers 00:21:39.167 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:39.167 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:39.167 Initialization complete. Launching workers. 
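The summary that follows can be cross-checked against the command line above (-q 128, i.e. queue depth 128, and -o 12288, i.e. 12288-byte I/Os): throughput is IOPS times I/O size, 71725.88 IO/s x 12288 B / 2^20 ≈ 840.54 MiB/s, and by Little's law the mean latency is queue depth / IOPS, 128 / 71725.88 IO/s ≈ 1784.6 us, in line with the 1784.48 us average reported below.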
00:21:39.167 ======================================================== 00:21:39.167 Latency(us) 00:21:39.167 Device Information : IOPS MiB/s Average min max 00:21:39.167 PCIE (0000:00:10.0) NSID 1 from core 0: 71725.88 840.54 1784.48 742.28 10764.44 00:21:39.167 ======================================================== 00:21:39.167 Total : 71725.88 840.54 1784.48 742.28 10764.44 00:21:39.167 00:21:39.167 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:21:39.167 ================================================================================= 00:21:39.167 1.00000% : 1333.065us 00:21:39.167 10.00000% : 1586.273us 00:21:39.167 25.00000% : 1653.299us 00:21:39.167 50.00000% : 1720.324us 00:21:39.167 75.00000% : 1839.481us 00:21:39.167 90.00000% : 2115.031us 00:21:39.167 95.00000% : 2278.871us 00:21:39.167 98.00000% : 2472.501us 00:21:39.167 99.00000% : 2695.920us 00:21:39.167 99.50000% : 2904.444us 00:21:39.167 99.90000% : 3693.857us 00:21:39.167 99.99000% : 8162.232us 00:21:39.167 99.99900% : 10783.678us 00:21:39.167 99.99990% : 10783.678us 00:21:39.167 99.99999% : 10783.678us 00:21:39.167 00:21:39.167 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:21:39.167 ============================================================================== 00:21:39.167 Range in us Cumulative IO count 00:21:39.167 741.006 - 744.729: 0.0125% ( 9) 00:21:39.167 744.729 - 748.453: 0.0139% ( 1) 00:21:39.167 975.595 - 983.043: 0.0153% ( 1) 00:21:39.167 983.043 - 990.490: 0.0209% ( 4) 00:21:39.167 990.490 - 997.937: 0.0279% ( 5) 00:21:39.167 997.937 - 1005.384: 0.0390% ( 8) 00:21:39.167 1005.384 - 1012.832: 0.0543% ( 11) 00:21:39.167 1012.832 - 1020.279: 0.0585% ( 3) 00:21:39.167 1020.279 - 1027.726: 0.0655% ( 5) 00:21:39.167 1027.726 - 1035.174: 0.0725% ( 5) 00:21:39.167 1035.174 - 1042.621: 0.0892% ( 12) 00:21:39.167 1042.621 - 1050.068: 0.1198% ( 22) 00:21:39.167 1050.068 - 1057.515: 0.1296% ( 7) 00:21:39.167 1057.515 - 1064.963: 0.1407% ( 8) 00:21:39.167 1064.963 - 1072.410: 0.1533% ( 9) 00:21:39.167 1072.410 - 1079.857: 0.1602% ( 5) 00:21:39.167 1079.857 - 1087.305: 0.1672% ( 5) 00:21:39.167 1087.305 - 1094.752: 0.1728% ( 4) 00:21:39.167 1094.752 - 1102.199: 0.1769% ( 3) 00:21:39.167 1102.199 - 1109.646: 0.1797% ( 2) 00:21:39.167 1109.646 - 1117.094: 0.1811% ( 1) 00:21:39.167 1124.541 - 1131.988: 0.1825% ( 1) 00:21:39.167 1139.436 - 1146.883: 0.1965% ( 10) 00:21:39.167 1146.883 - 1154.330: 0.2132% ( 12) 00:21:39.167 1154.330 - 1161.778: 0.2271% ( 10) 00:21:39.167 1161.778 - 1169.225: 0.2424% ( 11) 00:21:39.167 1169.225 - 1176.672: 0.2814% ( 28) 00:21:39.167 1176.672 - 1184.119: 0.3107% ( 21) 00:21:39.167 1184.119 - 1191.567: 0.3483% ( 27) 00:21:39.167 1191.567 - 1199.014: 0.3734% ( 18) 00:21:39.167 1199.014 - 1206.461: 0.3832% ( 7) 00:21:39.167 1206.461 - 1213.909: 0.4027% ( 14) 00:21:39.167 1213.909 - 1221.356: 0.4277% ( 18) 00:21:39.167 1221.356 - 1228.803: 0.4528% ( 18) 00:21:39.167 1228.803 - 1236.250: 0.4876% ( 25) 00:21:39.167 1236.250 - 1243.698: 0.5113% ( 17) 00:21:39.167 1243.698 - 1251.145: 0.5406% ( 21) 00:21:39.167 1251.145 - 1258.592: 0.5880% ( 34) 00:21:39.167 1258.592 - 1266.040: 0.6353% ( 34) 00:21:39.167 1266.040 - 1273.487: 0.6827% ( 34) 00:21:39.167 1273.487 - 1280.934: 0.7203% ( 27) 00:21:39.167 1280.934 - 1288.381: 0.7468% ( 19) 00:21:39.167 1288.381 - 1295.829: 0.7997% ( 38) 00:21:39.167 1295.829 - 1303.276: 0.8304% ( 22) 00:21:39.167 1303.276 - 1310.723: 0.8597% ( 21) 00:21:39.167 1310.723 - 1318.171: 0.9001% ( 29) 00:21:39.167 1318.171 - 1325.618: 0.9572% ( 
41) 00:21:39.167 1325.618 - 1333.065: 1.0157% ( 42) 00:21:39.167 1333.065 - 1340.513: 1.0868% ( 51) 00:21:39.167 1340.513 - 1347.960: 1.1578% ( 51) 00:21:39.167 1347.960 - 1355.407: 1.2344% ( 55) 00:21:39.167 1355.407 - 1362.854: 1.3584% ( 89) 00:21:39.167 1362.854 - 1370.302: 1.4880% ( 93) 00:21:39.167 1370.302 - 1377.749: 1.6274% ( 100) 00:21:39.167 1377.749 - 1385.196: 1.7500% ( 88) 00:21:39.167 1385.196 - 1392.644: 1.8308% ( 58) 00:21:39.167 1392.644 - 1400.091: 1.9116% ( 58) 00:21:39.167 1400.091 - 1407.538: 2.0035% ( 66) 00:21:39.167 1407.538 - 1414.985: 2.0997% ( 69) 00:21:39.167 1414.985 - 1422.433: 2.2251% ( 90) 00:21:39.167 1422.433 - 1429.880: 2.2906% ( 47) 00:21:39.167 1429.880 - 1437.327: 2.3658% ( 54) 00:21:39.167 1437.327 - 1444.775: 2.4647% ( 71) 00:21:39.167 1444.775 - 1452.222: 2.6082% ( 103) 00:21:39.167 1452.222 - 1459.669: 2.6695% ( 44) 00:21:39.167 1459.669 - 1467.116: 2.7921% ( 88) 00:21:39.167 1467.116 - 1474.564: 2.9370% ( 104) 00:21:39.167 1474.564 - 1482.011: 3.1210% ( 132) 00:21:39.167 1482.011 - 1489.458: 3.3285% ( 149) 00:21:39.167 1489.458 - 1496.906: 3.5069% ( 128) 00:21:39.167 1496.906 - 1504.353: 3.7479% ( 173) 00:21:39.167 1504.353 - 1511.800: 4.0363% ( 207) 00:21:39.167 1511.800 - 1519.248: 4.4864% ( 323) 00:21:39.167 1519.248 - 1526.695: 4.9531% ( 335) 00:21:39.167 1526.695 - 1534.142: 5.5188% ( 406) 00:21:39.167 1534.142 - 1541.589: 6.0009% ( 346) 00:21:39.167 1541.589 - 1549.037: 6.5540% ( 397) 00:21:39.167 1549.037 - 1556.484: 7.1392% ( 420) 00:21:39.167 1556.484 - 1563.931: 7.8888% ( 538) 00:21:39.167 1563.931 - 1571.379: 8.7498% ( 618) 00:21:39.167 1571.379 - 1578.826: 9.7948% ( 750) 00:21:39.167 1578.826 - 1586.273: 10.8105% ( 729) 00:21:39.167 1586.273 - 1593.720: 12.0059% ( 858) 00:21:39.167 1593.720 - 1601.168: 13.2264% ( 876) 00:21:39.167 1601.168 - 1608.615: 14.4790% ( 899) 00:21:39.167 1608.615 - 1616.062: 16.0116% ( 1100) 00:21:39.167 1616.062 - 1623.510: 17.8744% ( 1337) 00:21:39.167 1623.510 - 1630.957: 19.7930% ( 1377) 00:21:39.167 1630.957 - 1638.404: 21.8843% ( 1501) 00:21:39.167 1638.404 - 1645.851: 24.0494% ( 1554) 00:21:39.167 1645.851 - 1653.299: 26.4431% ( 1718) 00:21:39.167 1653.299 - 1660.746: 28.9984% ( 1834) 00:21:39.167 1660.746 - 1668.193: 31.5662% ( 1843) 00:21:39.167 1668.193 - 1675.641: 34.1577% ( 1860) 00:21:39.167 1675.641 - 1683.088: 36.8676% ( 1945) 00:21:39.167 1683.088 - 1690.535: 39.5218% ( 1905) 00:21:39.167 1690.535 - 1697.983: 42.0994% ( 1850) 00:21:39.167 1697.983 - 1705.430: 44.7090% ( 1873) 00:21:39.167 1705.430 - 1712.877: 47.4050% ( 1935) 00:21:39.167 1712.877 - 1720.324: 50.1261% ( 1953) 00:21:39.167 1720.324 - 1727.772: 52.6117% ( 1784) 00:21:39.167 1727.772 - 1735.219: 54.9928% ( 1709) 00:21:39.167 1735.219 - 1742.666: 57.1747% ( 1566) 00:21:39.167 1742.666 - 1750.114: 59.4095% ( 1604) 00:21:39.167 1750.114 - 1757.561: 61.5329% ( 1524) 00:21:39.167 1757.561 - 1765.008: 63.4347% ( 1365) 00:21:39.167 1765.008 - 1772.455: 65.1721% ( 1247) 00:21:39.167 1772.455 - 1779.903: 66.6741% ( 1078) 00:21:39.167 1779.903 - 1787.350: 68.1593% ( 1066) 00:21:39.167 1787.350 - 1794.797: 69.4969% ( 960) 00:21:39.167 1794.797 - 1802.245: 70.6533% ( 830) 00:21:39.167 1802.245 - 1809.692: 71.7108% ( 759) 00:21:39.167 1809.692 - 1817.139: 72.6917% ( 704) 00:21:39.167 1817.139 - 1824.586: 73.5263% ( 599) 00:21:39.167 1824.586 - 1832.034: 74.3358% ( 581) 00:21:39.167 1832.034 - 1839.481: 75.1035% ( 551) 00:21:39.167 1839.481 - 1846.928: 75.7179% ( 441) 00:21:39.167 1846.928 - 1854.376: 76.3268% ( 437) 00:21:39.167 1854.376 - 
1861.823: 76.9551% ( 451) 00:21:39.167 1861.823 - 1869.270: 77.5751% ( 445) 00:21:39.167 1869.270 - 1876.718: 78.1924% ( 443) 00:21:39.167 1876.718 - 1884.165: 78.7371% ( 391) 00:21:39.167 1884.165 - 1891.612: 79.2554% ( 372) 00:21:39.167 1891.612 - 1899.059: 79.7208% ( 334) 00:21:39.167 1899.059 - 1906.507: 80.1416% ( 302) 00:21:39.167 1906.507 - 1921.401: 81.1573% ( 729) 00:21:39.167 1921.401 - 1936.296: 82.0044% ( 608) 00:21:39.167 1936.296 - 1951.190: 82.8334% ( 595) 00:21:39.167 1951.190 - 1966.085: 83.5203% ( 493) 00:21:39.167 1966.085 - 1980.980: 84.2768% ( 543) 00:21:39.167 1980.980 - 1995.874: 85.0417% ( 549) 00:21:39.167 1995.874 - 2010.769: 85.7356% ( 498) 00:21:39.167 2010.769 - 2025.663: 86.4671% ( 525) 00:21:39.167 2025.663 - 2040.558: 87.2097% ( 533) 00:21:39.167 2040.558 - 2055.453: 87.9202% ( 510) 00:21:39.167 2055.453 - 2070.347: 88.5124% ( 425) 00:21:39.167 2070.347 - 2085.242: 89.0795% ( 407) 00:21:39.167 2085.242 - 2100.136: 89.5908% ( 367) 00:21:39.167 2100.136 - 2115.031: 90.1356% ( 391) 00:21:39.168 2115.031 - 2129.925: 90.6511% ( 370) 00:21:39.168 2129.925 - 2144.820: 91.2363% ( 420) 00:21:39.168 2144.820 - 2159.715: 91.7532% ( 371) 00:21:39.168 2159.715 - 2174.609: 92.3565% ( 433) 00:21:39.168 2174.609 - 2189.504: 92.8525% ( 356) 00:21:39.168 2189.504 - 2204.398: 93.2607% ( 293) 00:21:39.168 2204.398 - 2219.293: 93.6898% ( 308) 00:21:39.168 2219.293 - 2234.188: 94.0549% ( 262) 00:21:39.168 2234.188 - 2249.082: 94.3948% ( 244) 00:21:39.168 2249.082 - 2263.977: 94.7431% ( 250) 00:21:39.168 2263.977 - 2278.871: 95.1012% ( 257) 00:21:39.168 2278.871 - 2293.766: 95.5108% ( 294) 00:21:39.168 2293.766 - 2308.660: 95.8369% ( 234) 00:21:39.168 2308.660 - 2323.555: 96.1490% ( 224) 00:21:39.168 2323.555 - 2338.450: 96.4081% ( 186) 00:21:39.168 2338.450 - 2353.344: 96.6617% ( 182) 00:21:39.168 2353.344 - 2368.239: 96.9041% ( 174) 00:21:39.168 2368.239 - 2383.133: 97.1201% ( 155) 00:21:39.168 2383.133 - 2398.028: 97.2775% ( 113) 00:21:39.168 2398.028 - 2412.923: 97.4210% ( 103) 00:21:39.168 2412.923 - 2427.817: 97.5534% ( 95) 00:21:39.168 2427.817 - 2442.712: 97.7234% ( 122) 00:21:39.168 2442.712 - 2457.606: 97.8502% ( 91) 00:21:39.168 2457.606 - 2472.501: 98.0271% ( 127) 00:21:39.168 2472.501 - 2487.395: 98.1637% ( 98) 00:21:39.168 2487.395 - 2502.290: 98.3141% ( 108) 00:21:39.168 2502.290 - 2517.185: 98.4200% ( 76) 00:21:39.168 2517.185 - 2532.079: 98.4813% ( 44) 00:21:39.168 2532.079 - 2546.974: 98.5398% ( 42) 00:21:39.168 2546.974 - 2561.868: 98.5830% ( 31) 00:21:39.168 2561.868 - 2576.763: 98.6248% ( 30) 00:21:39.168 2576.763 - 2591.658: 98.6820% ( 41) 00:21:39.168 2591.658 - 2606.552: 98.7474% ( 47) 00:21:39.168 2606.552 - 2621.447: 98.7948% ( 34) 00:21:39.168 2621.447 - 2636.341: 98.8422% ( 34) 00:21:39.168 2636.341 - 2651.236: 98.8868% ( 32) 00:21:39.168 2651.236 - 2666.130: 98.9341% ( 34) 00:21:39.168 2666.130 - 2681.025: 98.9829% ( 35) 00:21:39.168 2681.025 - 2695.920: 99.0442% ( 44) 00:21:39.168 2695.920 - 2710.814: 99.0916% ( 34) 00:21:39.168 2710.814 - 2725.709: 99.1348% ( 31) 00:21:39.168 2725.709 - 2740.603: 99.1640% ( 21) 00:21:39.168 2740.603 - 2755.498: 99.2016% ( 27) 00:21:39.168 2755.498 - 2770.393: 99.2212% ( 14) 00:21:39.168 2770.393 - 2785.287: 99.2365% ( 11) 00:21:39.168 2785.287 - 2800.182: 99.2671% ( 22) 00:21:39.168 2800.182 - 2815.076: 99.2880% ( 15) 00:21:39.168 2815.076 - 2829.971: 99.2992% ( 8) 00:21:39.168 2829.971 - 2844.865: 99.3284% ( 21) 00:21:39.168 2844.865 - 2859.760: 99.3856% ( 41) 00:21:39.168 2859.760 - 2874.655: 99.4343% ( 35) 
00:21:39.168 2874.655 - 2889.549: 99.4817% ( 34) 00:21:39.168 2889.549 - 2904.444: 99.5207% ( 28) 00:21:39.168 2904.444 - 2919.338: 99.5472% ( 19) 00:21:39.168 2919.338 - 2934.233: 99.5611% ( 10) 00:21:39.168 2934.233 - 2949.128: 99.5639% ( 2) 00:21:39.168 2949.128 - 2964.022: 99.5764% ( 9) 00:21:39.168 2964.022 - 2978.917: 99.5890% ( 9) 00:21:39.168 2978.917 - 2993.811: 99.5987% ( 7) 00:21:39.168 2993.811 - 3008.706: 99.6196% ( 15) 00:21:39.168 3008.706 - 3023.600: 99.6252% ( 4) 00:21:39.168 3023.600 - 3038.495: 99.6280% ( 2) 00:21:39.168 3038.495 - 3053.390: 99.6336% ( 4) 00:21:39.168 3053.390 - 3068.284: 99.6364% ( 2) 00:21:39.168 3068.284 - 3083.179: 99.6391% ( 2) 00:21:39.168 3083.179 - 3098.073: 99.6419% ( 2) 00:21:39.168 3098.073 - 3112.968: 99.6461% ( 3) 00:21:39.168 3112.968 - 3127.863: 99.6503% ( 3) 00:21:39.168 3127.863 - 3142.757: 99.6517% ( 1) 00:21:39.168 3142.757 - 3157.652: 99.6600% ( 6) 00:21:39.168 3157.652 - 3172.546: 99.6642% ( 3) 00:21:39.168 3172.546 - 3187.441: 99.6782% ( 10) 00:21:39.168 3187.441 - 3202.335: 99.6893% ( 8) 00:21:39.168 3202.335 - 3217.230: 99.7004% ( 8) 00:21:39.168 3217.230 - 3232.125: 99.7060% ( 4) 00:21:39.168 3232.125 - 3247.019: 99.7116% ( 4) 00:21:39.168 3247.019 - 3261.914: 99.7158% ( 3) 00:21:39.168 3261.914 - 3276.808: 99.7241% ( 6) 00:21:39.168 3276.808 - 3291.703: 99.7367% ( 9) 00:21:39.168 3291.703 - 3306.598: 99.7408% ( 3) 00:21:39.168 3306.598 - 3321.492: 99.7548% ( 10) 00:21:39.168 3321.492 - 3336.387: 99.7617% ( 5) 00:21:39.168 3336.387 - 3351.281: 99.7659% ( 3) 00:21:39.168 3351.281 - 3366.176: 99.7687% ( 2) 00:21:39.168 3366.176 - 3381.070: 99.7715% ( 2) 00:21:39.168 3381.070 - 3395.965: 99.7757% ( 3) 00:21:39.168 3395.965 - 3410.860: 99.7785% ( 2) 00:21:39.168 3410.860 - 3425.754: 99.7952% ( 12) 00:21:39.168 3425.754 - 3440.649: 99.8119% ( 12) 00:21:39.168 3440.649 - 3455.543: 99.8217% ( 7) 00:21:39.168 3455.543 - 3470.438: 99.8314% ( 7) 00:21:39.168 3470.438 - 3485.333: 99.8328% ( 1) 00:21:39.428 3485.333 - 3500.227: 99.8342% ( 1) 00:21:39.428 3500.227 - 3515.122: 99.8356% ( 1) 00:21:39.428 3515.122 - 3530.016: 99.8384% ( 2) 00:21:39.428 3530.016 - 3544.911: 99.8440% ( 4) 00:21:39.428 3559.805 - 3574.700: 99.8509% ( 5) 00:21:39.428 3574.700 - 3589.595: 99.8551% ( 3) 00:21:39.428 3589.595 - 3604.489: 99.8565% ( 1) 00:21:39.428 3619.384 - 3634.278: 99.8690% ( 9) 00:21:39.428 3634.278 - 3649.173: 99.8704% ( 1) 00:21:39.428 3649.173 - 3664.068: 99.8746% ( 3) 00:21:39.428 3664.068 - 3678.962: 99.8885% ( 10) 00:21:39.428 3678.962 - 3693.857: 99.9108% ( 16) 00:21:39.428 3693.857 - 3708.751: 99.9150% ( 3) 00:21:39.428 3708.751 - 3723.646: 99.9248% ( 7) 00:21:39.428 3723.646 - 3738.540: 99.9373% ( 9) 00:21:39.428 3738.540 - 3753.435: 99.9443% ( 5) 00:21:39.428 3753.435 - 3768.330: 99.9457% ( 1) 00:21:39.428 3783.224 - 3798.119: 99.9471% ( 1) 00:21:39.428 3798.119 - 3813.013: 99.9484% ( 1) 00:21:39.428 3813.013 - 3842.803: 99.9554% ( 5) 00:21:39.428 3842.803 - 3872.592: 99.9568% ( 1) 00:21:39.428 3902.381 - 3932.170: 99.9693% ( 9) 00:21:39.428 3932.170 - 3961.959: 99.9707% ( 1) 00:21:39.428 4200.273 - 4230.062: 99.9735% ( 2) 00:21:39.428 4289.640 - 4319.429: 99.9749% ( 1) 00:21:39.428 4319.429 - 4349.218: 99.9791% ( 3) 00:21:39.428 5183.315 - 5213.104: 99.9819% ( 2) 00:21:39.428 6434.460 - 6464.249: 99.9847% ( 2) 00:21:39.428 7983.497 - 8043.075: 99.9861% ( 1) 00:21:39.428 8043.075 - 8102.653: 99.9889% ( 2) 00:21:39.428 8102.653 - 8162.232: 99.9902% ( 1) 00:21:39.428 10724.100 - 10783.678: 100.0000% ( 7) 00:21:39.428 00:21:39.428 06:34:51 
nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:21:39.428 00:21:39.428 real 0m3.775s 00:21:39.428 user 0m2.558s 00:21:39.428 sys 0m1.214s 00:21:39.428 06:34:51 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:39.428 06:34:51 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:21:39.428 ************************************ 00:21:39.428 END TEST nvme_perf 00:21:39.428 ************************************ 00:21:39.428 06:34:51 nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:39.428 06:34:51 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:21:39.428 06:34:51 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:39.428 06:34:51 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.428 06:34:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:39.428 ************************************ 00:21:39.428 START TEST nvme_hello_world 00:21:39.428 ************************************ 00:21:39.428 06:34:51 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:21:39.996 EAL: TSC is not safe to use in SMP mode 00:21:39.996 EAL: TSC is not invariant 00:21:39.996 [2024-07-23 06:34:52.437096] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:39.996 Initializing NVMe Controllers 00:21:39.996 Attaching to 0000:00:10.0 00:21:39.996 Attached to 0000:00:10.0 00:21:39.996 Namespace ID: 1 size: 5GB 00:21:39.996 Initialization complete. 00:21:39.996 INFO: using host memory buffer for IO 00:21:39.996 Hello world! 00:21:39.996 00:21:39.996 real 0m0.590s 00:21:39.996 user 0m0.020s 00:21:39.996 sys 0m0.570s 00:21:39.996 06:34:52 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:39.996 06:34:52 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:39.996 ************************************ 00:21:39.996 END TEST nvme_hello_world 00:21:39.996 ************************************ 00:21:40.262 06:34:52 nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:40.262 06:34:52 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:21:40.262 06:34:52 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:40.262 06:34:52 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.262 06:34:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:40.262 ************************************ 00:21:40.262 START TEST nvme_sgl 00:21:40.262 ************************************ 00:21:40.262 06:34:52 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:21:40.844 EAL: TSC is not safe to use in SMP mode 00:21:40.844 EAL: TSC is not invariant 00:21:40.844 [2024-07-23 06:34:53.084894] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:40.844 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:21:40.844 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:21:40.844 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:21:40.844 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:21:40.844 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:21:40.844 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:21:40.844 NVMe Readv/Writev Request test 00:21:40.844 Attaching to 0000:00:10.0 00:21:40.844 
Attached to 0000:00:10.0 00:21:40.844 0000:00:10.0: build_io_request_2 test passed 00:21:40.844 0000:00:10.0: build_io_request_4 test passed 00:21:40.844 0000:00:10.0: build_io_request_5 test passed 00:21:40.844 0000:00:10.0: build_io_request_6 test passed 00:21:40.844 0000:00:10.0: build_io_request_7 test passed 00:21:40.844 0000:00:10.0: build_io_request_10 test passed 00:21:40.844 Cleaning up... 00:21:40.844 00:21:40.844 real 0m0.597s 00:21:40.844 user 0m0.016s 00:21:40.844 sys 0m0.581s 00:21:40.844 06:34:53 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:40.844 06:34:53 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:21:40.844 ************************************ 00:21:40.844 END TEST nvme_sgl 00:21:40.844 ************************************ 00:21:40.844 06:34:53 nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:40.844 06:34:53 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:21:40.844 06:34:53 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:40.844 06:34:53 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.844 06:34:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:40.844 ************************************ 00:21:40.844 START TEST nvme_e2edp 00:21:40.844 ************************************ 00:21:40.844 06:34:53 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:21:41.412 EAL: TSC is not safe to use in SMP mode 00:21:41.412 EAL: TSC is not invariant 00:21:41.412 [2024-07-23 06:34:53.727272] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:41.412 NVMe Write/Read with End-to-End data protection test 00:21:41.412 Attaching to 0000:00:10.0 00:21:41.412 Attached to 0000:00:10.0 00:21:41.412 Cleaning up... 
00:21:41.412 00:21:41.412 real 0m0.592s 00:21:41.412 user 0m0.009s 00:21:41.412 sys 0m0.583s 00:21:41.412 06:34:53 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:41.412 ************************************ 00:21:41.412 END TEST nvme_e2edp 00:21:41.412 ************************************ 00:21:41.412 06:34:53 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:21:41.412 06:34:53 nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:41.412 06:34:53 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:21:41.412 06:34:53 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:41.412 06:34:53 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:41.412 06:34:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:41.412 ************************************ 00:21:41.412 START TEST nvme_reserve 00:21:41.412 ************************************ 00:21:41.412 06:34:53 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:21:41.979 EAL: TSC is not safe to use in SMP mode 00:21:41.979 EAL: TSC is not invariant 00:21:41.979 [2024-07-23 06:34:54.381114] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:41.979 ===================================================== 00:21:41.979 NVMe Controller at PCI bus 0, device 16, function 0 00:21:41.979 ===================================================== 00:21:41.979 Reservations: Not Supported 00:21:41.979 Reservation test passed 00:21:41.979 00:21:41.979 real 0m0.605s 00:21:41.979 user 0m0.000s 00:21:41.979 sys 0m0.605s 00:21:41.979 06:34:54 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:41.979 ************************************ 00:21:41.979 END TEST nvme_reserve 00:21:41.979 ************************************ 00:21:41.979 06:34:54 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:21:41.979 06:34:54 nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:41.979 06:34:54 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:21:41.979 06:34:54 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:41.979 06:34:54 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:41.979 06:34:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:41.979 ************************************ 00:21:41.979 START TEST nvme_err_injection 00:21:41.979 ************************************ 00:21:41.979 06:34:54 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:21:42.546 EAL: TSC is not safe to use in SMP mode 00:21:42.546 EAL: TSC is not invariant 00:21:42.546 [2024-07-23 06:34:55.007113] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:42.546 NVMe Error Injection test 00:21:42.546 Attaching to 0000:00:10.0 00:21:42.546 Attached to 0000:00:10.0 00:21:42.546 0000:00:10.0: get features failed as expected 00:21:42.546 0000:00:10.0: get features successfully as expected 00:21:42.546 0000:00:10.0: read failed as expected 00:21:42.546 0000:00:10.0: read successfully as expected 00:21:42.546 Cleaning up... 
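Each of these example binaries is launched through the run_test helper from autotest_common.sh, visible in the traces above (run_test nvme_reserve, run_test nvme_err_injection, and so on). The sketch below is a minimal approximation of the observable pattern only (banner, timed command, banner), inferred from this log; the real helper also manages xtrace toggling and argument checks, so treat the body as illustrative rather than the actual implementation.

run_test() {
  local test_name=$1
  shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  # run the test command itself; the real/user/sys lines in the log come from timing it
  time "$@"
  local rc=$?
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
  return $rc
}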
00:21:42.546 00:21:42.546 real 0m0.594s 00:21:42.546 user 0m0.024s 00:21:42.546 sys 0m0.572s 00:21:42.546 06:34:55 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:42.546 06:34:55 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:21:42.546 ************************************ 00:21:42.546 END TEST nvme_err_injection 00:21:42.546 ************************************ 00:21:42.805 06:34:55 nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:42.805 06:34:55 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:21:42.805 06:34:55 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:21:42.805 06:34:55 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:42.805 06:34:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:42.805 ************************************ 00:21:42.805 START TEST nvme_overhead 00:21:42.805 ************************************ 00:21:42.805 06:34:55 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:21:43.376 EAL: TSC is not safe to use in SMP mode 00:21:43.376 EAL: TSC is not invariant 00:21:43.376 [2024-07-23 06:34:55.684726] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:44.333 Initializing NVMe Controllers 00:21:44.333 Attaching to 0000:00:10.0 00:21:44.333 Attached to 0000:00:10.0 00:21:44.333 Initialization complete. Launching workers. 00:21:44.333 submit (in ns) avg, min, max = 10269.8, 7147.3, 49741.5 00:21:44.333 complete (in ns) avg, min, max = 7947.9, 5400.0, 103713.0 00:21:44.333 00:21:44.333 Submit histogram 00:21:44.333 ================ 00:21:44.333 Range in us Cumulative Count 00:21:44.333 7.127 - 7.156: 0.0103% ( 1) 00:21:44.333 7.215 - 7.244: 0.0205% ( 1) 00:21:44.333 7.244 - 7.273: 0.0513% ( 3) 00:21:44.333 7.273 - 7.302: 0.0718% ( 2) 00:21:44.333 7.302 - 7.331: 0.1538% ( 8) 00:21:44.333 7.331 - 7.360: 0.2256% ( 7) 00:21:44.333 7.360 - 7.389: 0.3077% ( 8) 00:21:44.333 7.389 - 7.418: 0.3897% ( 8) 00:21:44.333 7.418 - 7.447: 0.4410% ( 5) 00:21:44.333 7.447 - 7.505: 0.8000% ( 35) 00:21:44.333 7.505 - 7.564: 1.2103% ( 40) 00:21:44.333 7.564 - 7.622: 1.8872% ( 66) 00:21:44.333 7.622 - 7.680: 2.9026% ( 99) 00:21:44.333 7.680 - 7.738: 4.0821% ( 115) 00:21:44.333 7.738 - 7.796: 5.2103% ( 110) 00:21:44.333 7.796 - 7.855: 6.3385% ( 110) 00:21:44.333 7.855 - 7.913: 7.1385% ( 78) 00:21:44.333 7.913 - 7.971: 7.8154% ( 66) 00:21:44.333 7.971 - 8.029: 8.4513% ( 62) 00:21:44.333 8.029 - 8.087: 8.9231% ( 46) 00:21:44.333 8.087 - 8.145: 9.3231% ( 39) 00:21:44.333 8.145 - 8.204: 9.8974% ( 56) 00:21:44.333 8.204 - 8.262: 10.7692% ( 85) 00:21:44.333 8.262 - 8.320: 11.5692% ( 78) 00:21:44.333 8.320 - 8.378: 12.8410% ( 124) 00:21:44.333 8.378 - 8.436: 14.0308% ( 116) 00:21:44.333 8.436 - 8.495: 15.2410% ( 118) 00:21:44.333 8.495 - 8.553: 16.8513% ( 157) 00:21:44.333 8.553 - 8.611: 19.6923% ( 277) 00:21:44.333 8.611 - 8.669: 22.8103% ( 304) 00:21:44.333 8.669 - 8.727: 25.6513% ( 277) 00:21:44.333 8.727 - 8.785: 28.4718% ( 275) 00:21:44.333 8.785 - 8.844: 31.7641% ( 321) 00:21:44.333 8.844 - 8.902: 36.2462% ( 437) 00:21:44.333 8.902 - 8.960: 40.5846% ( 423) 00:21:44.333 8.960 - 9.018: 44.5949% ( 391) 00:21:44.333 9.018 - 9.076: 50.1128% ( 538) 00:21:44.333 9.076 - 9.135: 57.0462% ( 676) 00:21:44.333 9.135 - 9.193: 63.3333% ( 613) 00:21:44.333 9.193 - 9.251: 67.4769% ( 404) 
00:21:44.333 9.251 - 9.309: 69.9487% ( 241) 00:21:44.333 9.309 - 9.367: 71.4256% ( 144) 00:21:44.333 9.367 - 9.425: 72.5333% ( 108) 00:21:44.333 9.425 - 9.484: 73.4769% ( 92) 00:21:44.333 9.484 - 9.542: 73.9179% ( 43) 00:21:44.333 9.542 - 9.600: 74.2256% ( 30) 00:21:44.333 9.600 - 9.658: 74.4513% ( 22) 00:21:44.333 9.658 - 9.716: 74.6256% ( 17) 00:21:44.333 9.716 - 9.775: 74.8410% ( 21) 00:21:44.333 9.775 - 9.833: 74.9436% ( 10) 00:21:44.333 9.833 - 9.891: 75.0154% ( 7) 00:21:44.333 9.891 - 9.949: 75.0667% ( 5) 00:21:44.333 9.949 - 10.007: 75.1282% ( 6) 00:21:44.333 10.007 - 10.065: 75.1897% ( 6) 00:21:44.333 10.065 - 10.124: 75.6308% ( 43) 00:21:44.333 10.124 - 10.182: 76.4205% ( 77) 00:21:44.333 10.182 - 10.240: 77.8564% ( 140) 00:21:44.333 10.240 - 10.298: 79.5179% ( 162) 00:21:44.333 10.298 - 10.356: 80.9436% ( 139) 00:21:44.333 10.356 - 10.415: 81.7231% ( 76) 00:21:44.333 10.415 - 10.473: 82.2154% ( 48) 00:21:44.333 10.473 - 10.531: 82.4103% ( 19) 00:21:44.333 10.531 - 10.589: 82.5128% ( 10) 00:21:44.333 10.589 - 10.647: 82.5846% ( 7) 00:21:44.333 10.647 - 10.705: 82.6462% ( 6) 00:21:44.333 10.705 - 10.764: 82.6872% ( 4) 00:21:44.333 10.764 - 10.822: 82.7590% ( 7) 00:21:44.333 10.822 - 10.880: 82.8000% ( 4) 00:21:44.333 10.938 - 10.996: 82.8718% ( 7) 00:21:44.333 10.996 - 11.055: 83.1487% ( 27) 00:21:44.333 11.055 - 11.113: 83.5795% ( 42) 00:21:44.333 11.113 - 11.171: 84.2359% ( 64) 00:21:44.333 11.171 - 11.229: 84.6256% ( 38) 00:21:44.333 11.229 - 11.287: 84.7897% ( 16) 00:21:44.333 11.287 - 11.345: 84.9026% ( 11) 00:21:44.333 11.345 - 11.404: 85.1487% ( 24) 00:21:44.333 11.404 - 11.462: 85.4256% ( 27) 00:21:44.333 11.462 - 11.520: 85.7128% ( 28) 00:21:44.333 11.520 - 11.578: 85.8564% ( 14) 00:21:44.333 11.578 - 11.636: 86.0308% ( 17) 00:21:44.333 11.636 - 11.695: 86.1128% ( 8) 00:21:44.333 11.695 - 11.753: 86.1538% ( 4) 00:21:44.333 11.753 - 11.811: 86.2872% ( 13) 00:21:44.333 11.811 - 11.869: 86.3385% ( 5) 00:21:44.333 11.869 - 11.927: 86.4000% ( 6) 00:21:44.333 11.927 - 11.985: 86.5846% ( 18) 00:21:44.333 11.985 - 12.044: 86.6769% ( 9) 00:21:44.333 12.044 - 12.102: 86.7795% ( 10) 00:21:44.333 12.102 - 12.160: 86.8615% ( 8) 00:21:44.333 12.160 - 12.218: 86.9744% ( 11) 00:21:44.333 12.218 - 12.276: 87.1077% ( 13) 00:21:44.333 12.276 - 12.335: 87.2000% ( 9) 00:21:44.333 12.335 - 12.393: 87.3333% ( 13) 00:21:44.333 12.393 - 12.451: 87.4769% ( 14) 00:21:44.333 12.451 - 12.509: 87.5590% ( 8) 00:21:44.333 12.509 - 12.567: 87.6821% ( 12) 00:21:44.333 12.567 - 12.625: 87.8256% ( 14) 00:21:44.333 12.625 - 12.684: 87.9282% ( 10) 00:21:44.333 12.684 - 12.742: 88.0410% ( 11) 00:21:44.333 12.742 - 12.800: 88.2564% ( 21) 00:21:44.333 12.800 - 12.858: 88.4000% ( 14) 00:21:44.333 12.858 - 12.916: 88.4821% ( 8) 00:21:44.333 12.916 - 12.975: 88.5436% ( 6) 00:21:44.333 12.975 - 13.033: 88.6462% ( 10) 00:21:44.333 13.033 - 13.091: 88.7179% ( 7) 00:21:44.333 13.091 - 13.149: 88.8513% ( 13) 00:21:44.333 13.149 - 13.207: 88.9538% ( 10) 00:21:44.333 13.207 - 13.265: 89.1590% ( 20) 00:21:44.333 13.265 - 13.324: 89.2615% ( 10) 00:21:44.333 13.324 - 13.382: 89.4256% ( 16) 00:21:44.333 13.382 - 13.440: 89.5282% ( 10) 00:21:44.333 13.440 - 13.498: 89.5897% ( 6) 00:21:44.333 13.498 - 13.556: 89.6821% ( 9) 00:21:44.333 13.556 - 13.615: 89.7641% ( 8) 00:21:44.333 13.615 - 13.673: 89.8667% ( 10) 00:21:44.333 13.673 - 13.731: 89.9590% ( 9) 00:21:44.333 13.731 - 13.789: 90.1026% ( 14) 00:21:44.333 13.789 - 13.847: 90.2359% ( 13) 00:21:44.333 13.847 - 13.905: 90.3385% ( 10) 00:21:44.333 13.905 - 13.964: 90.4615% ( 
12) 00:21:44.333 13.964 - 14.022: 90.5231% ( 6) 00:21:44.333 14.022 - 14.080: 90.6667% ( 14) 00:21:44.333 14.080 - 14.138: 90.7487% ( 8) 00:21:44.333 14.138 - 14.196: 90.8205% ( 7) 00:21:44.333 14.196 - 14.255: 90.9641% ( 14) 00:21:44.333 14.255 - 14.313: 91.0667% ( 10) 00:21:44.333 14.313 - 14.371: 91.1897% ( 12) 00:21:44.333 14.371 - 14.429: 91.2923% ( 10) 00:21:44.333 14.429 - 14.487: 91.4667% ( 17) 00:21:44.333 14.487 - 14.545: 91.5487% ( 8) 00:21:44.334 14.545 - 14.604: 91.7026% ( 15) 00:21:44.334 14.604 - 14.662: 91.9385% ( 23) 00:21:44.334 14.662 - 14.720: 92.0205% ( 8) 00:21:44.334 14.720 - 14.778: 92.1026% ( 8) 00:21:44.334 14.778 - 14.836: 92.1641% ( 6) 00:21:44.334 14.836 - 14.895: 92.2974% ( 13) 00:21:44.334 14.895 - 15.011: 92.5026% ( 20) 00:21:44.334 15.011 - 15.127: 92.6872% ( 18) 00:21:44.334 15.127 - 15.244: 92.9231% ( 23) 00:21:44.334 15.244 - 15.360: 93.0872% ( 16) 00:21:44.334 15.360 - 15.476: 93.2000% ( 11) 00:21:44.334 15.476 - 15.593: 93.3744% ( 17) 00:21:44.334 15.593 - 15.709: 93.5385% ( 16) 00:21:44.334 15.709 - 15.825: 93.6615% ( 12) 00:21:44.334 15.825 - 15.942: 93.7949% ( 13) 00:21:44.334 15.942 - 16.058: 93.8359% ( 4) 00:21:44.334 16.058 - 16.175: 93.9077% ( 7) 00:21:44.334 16.175 - 16.291: 93.9795% ( 7) 00:21:44.334 16.291 - 16.407: 94.0821% ( 10) 00:21:44.334 16.407 - 16.524: 94.1436% ( 6) 00:21:44.334 16.524 - 16.640: 94.1641% ( 2) 00:21:44.334 16.640 - 16.756: 94.2256% ( 6) 00:21:44.334 16.756 - 16.873: 94.2872% ( 6) 00:21:44.334 16.873 - 16.989: 94.3077% ( 2) 00:21:44.334 16.989 - 17.105: 94.3487% ( 4) 00:21:44.334 17.105 - 17.222: 94.4205% ( 7) 00:21:44.334 17.222 - 17.338: 94.4615% ( 4) 00:21:44.334 17.338 - 17.455: 94.5436% ( 8) 00:21:44.334 17.455 - 17.571: 94.6051% ( 6) 00:21:44.334 17.571 - 17.687: 94.6564% ( 5) 00:21:44.334 17.687 - 17.804: 94.6769% ( 2) 00:21:44.334 17.804 - 17.920: 94.7077% ( 3) 00:21:44.334 17.920 - 18.036: 94.7179% ( 1) 00:21:44.334 18.036 - 18.153: 94.7487% ( 3) 00:21:44.334 18.153 - 18.269: 94.7692% ( 2) 00:21:44.334 18.269 - 18.386: 94.8103% ( 4) 00:21:44.334 18.386 - 18.502: 94.8308% ( 2) 00:21:44.334 18.502 - 18.618: 94.8615% ( 3) 00:21:44.334 18.735 - 18.851: 94.9026% ( 4) 00:21:44.334 18.851 - 18.967: 94.9846% ( 8) 00:21:44.334 18.967 - 19.084: 95.0154% ( 3) 00:21:44.334 19.084 - 19.200: 95.0462% ( 3) 00:21:44.334 19.200 - 19.316: 95.0667% ( 2) 00:21:44.334 19.316 - 19.433: 95.0872% ( 2) 00:21:44.334 19.433 - 19.549: 95.1487% ( 6) 00:21:44.334 19.549 - 19.666: 95.1590% ( 1) 00:21:44.334 19.666 - 19.782: 95.1795% ( 2) 00:21:44.334 19.782 - 19.898: 95.2103% ( 3) 00:21:44.334 19.898 - 20.015: 95.2308% ( 2) 00:21:44.334 20.015 - 20.131: 95.2615% ( 3) 00:21:44.334 20.131 - 20.247: 95.3333% ( 7) 00:21:44.334 20.247 - 20.364: 95.3949% ( 6) 00:21:44.334 20.364 - 20.480: 95.4462% ( 5) 00:21:44.334 20.480 - 20.596: 95.4769% ( 3) 00:21:44.334 20.596 - 20.713: 95.5385% ( 6) 00:21:44.334 20.713 - 20.829: 95.5487% ( 1) 00:21:44.334 20.829 - 20.946: 95.5590% ( 1) 00:21:44.334 20.946 - 21.062: 95.5795% ( 2) 00:21:44.334 21.062 - 21.178: 95.5897% ( 1) 00:21:44.334 21.178 - 21.295: 95.6103% ( 2) 00:21:44.334 21.295 - 21.411: 95.6205% ( 1) 00:21:44.334 21.411 - 21.527: 95.6410% ( 2) 00:21:44.334 21.527 - 21.644: 95.6615% ( 2) 00:21:44.334 21.644 - 21.760: 95.6821% ( 2) 00:21:44.334 21.876 - 21.993: 95.6923% ( 1) 00:21:44.334 21.993 - 22.109: 95.7231% ( 3) 00:21:44.334 22.109 - 22.226: 95.7436% ( 2) 00:21:44.334 22.226 - 22.342: 95.7641% ( 2) 00:21:44.334 22.342 - 22.458: 95.7846% ( 2) 00:21:44.334 22.458 - 22.575: 95.8154% ( 3) 
00:21:44.334 22.575 - 22.691: 95.8769% ( 6) 00:21:44.334 22.691 - 22.807: 95.9692% ( 9) 00:21:44.334 22.807 - 22.924: 96.0205% ( 5) 00:21:44.334 22.924 - 23.040: 96.0718% ( 5) 00:21:44.334 23.040 - 23.156: 96.1538% ( 8) 00:21:44.334 23.156 - 23.273: 96.2667% ( 11) 00:21:44.334 23.273 - 23.389: 96.3692% ( 10) 00:21:44.334 23.389 - 23.506: 96.4718% ( 10) 00:21:44.334 23.506 - 23.622: 96.6154% ( 14) 00:21:44.334 23.622 - 23.738: 96.8205% ( 20) 00:21:44.334 23.738 - 23.855: 97.0359% ( 21) 00:21:44.334 23.855 - 23.971: 97.3026% ( 26) 00:21:44.334 23.971 - 24.087: 97.5282% ( 22) 00:21:44.334 24.087 - 24.204: 97.7538% ( 22) 00:21:44.334 24.204 - 24.320: 97.9179% ( 16) 00:21:44.334 24.320 - 24.436: 98.1026% ( 18) 00:21:44.334 24.436 - 24.553: 98.2769% ( 17) 00:21:44.334 24.553 - 24.669: 98.4000% ( 12) 00:21:44.334 24.669 - 24.786: 98.4923% ( 9) 00:21:44.334 24.786 - 24.902: 98.5436% ( 5) 00:21:44.334 24.902 - 25.018: 98.6256% ( 8) 00:21:44.334 25.018 - 25.135: 98.6462% ( 2) 00:21:44.334 25.135 - 25.251: 98.6769% ( 3) 00:21:44.334 25.251 - 25.367: 98.6974% ( 2) 00:21:44.334 25.367 - 25.484: 98.7385% ( 4) 00:21:44.334 25.484 - 25.600: 98.7795% ( 4) 00:21:44.334 25.600 - 25.716: 98.8410% ( 6) 00:21:44.334 25.716 - 25.833: 98.8821% ( 4) 00:21:44.334 25.949 - 26.066: 98.9333% ( 5) 00:21:44.334 26.066 - 26.182: 98.9538% ( 2) 00:21:44.334 26.182 - 26.298: 98.9744% ( 2) 00:21:44.334 26.298 - 26.415: 98.9846% ( 1) 00:21:44.334 26.415 - 26.531: 99.0051% ( 2) 00:21:44.334 26.531 - 26.647: 99.0359% ( 3) 00:21:44.334 26.647 - 26.764: 99.0564% ( 2) 00:21:44.334 26.764 - 26.880: 99.0769% ( 2) 00:21:44.334 26.880 - 26.996: 99.1077% ( 3) 00:21:44.334 26.996 - 27.113: 99.1385% ( 3) 00:21:44.334 27.113 - 27.229: 99.1487% ( 1) 00:21:44.334 27.229 - 27.346: 99.1692% ( 2) 00:21:44.334 27.346 - 27.462: 99.1795% ( 1) 00:21:44.334 27.462 - 27.578: 99.1897% ( 1) 00:21:44.334 27.578 - 27.695: 99.2205% ( 3) 00:21:44.334 27.695 - 27.811: 99.2410% ( 2) 00:21:44.334 27.811 - 27.927: 99.2615% ( 2) 00:21:44.334 28.044 - 28.160: 99.2821% ( 2) 00:21:44.334 28.160 - 28.276: 99.3128% ( 3) 00:21:44.334 28.276 - 28.393: 99.3231% ( 1) 00:21:44.334 28.393 - 28.509: 99.3538% ( 3) 00:21:44.334 28.509 - 28.626: 99.3949% ( 4) 00:21:44.334 28.626 - 28.742: 99.4154% ( 2) 00:21:44.334 28.742 - 28.858: 99.4564% ( 4) 00:21:44.334 28.975 - 29.091: 99.4872% ( 3) 00:21:44.334 29.091 - 29.207: 99.5282% ( 4) 00:21:44.334 29.207 - 29.324: 99.5487% ( 2) 00:21:44.334 29.324 - 29.440: 99.5897% ( 4) 00:21:44.334 29.440 - 29.556: 99.6103% ( 2) 00:21:44.334 29.556 - 29.673: 99.6308% ( 2) 00:21:44.334 29.673 - 29.789: 99.6513% ( 2) 00:21:44.334 29.789 - 30.022: 99.6718% ( 2) 00:21:44.334 30.022 - 30.255: 99.6821% ( 1) 00:21:44.334 30.255 - 30.487: 99.7231% ( 4) 00:21:44.334 30.487 - 30.720: 99.7641% ( 4) 00:21:44.334 30.720 - 30.953: 99.7744% ( 1) 00:21:44.334 30.953 - 31.186: 99.8051% ( 3) 00:21:44.334 31.186 - 31.418: 99.8154% ( 1) 00:21:44.334 31.418 - 31.651: 99.8256% ( 1) 00:21:44.334 31.651 - 31.884: 99.8462% ( 2) 00:21:44.334 32.582 - 32.815: 99.8564% ( 1) 00:21:44.334 34.211 - 34.444: 99.8667% ( 1) 00:21:44.334 34.444 - 34.676: 99.8872% ( 2) 00:21:44.334 35.840 - 36.073: 99.8974% ( 1) 00:21:44.334 36.771 - 37.004: 99.9077% ( 1) 00:21:44.334 37.702 - 37.935: 99.9179% ( 1) 00:21:44.334 37.935 - 38.167: 99.9385% ( 2) 00:21:44.334 38.167 - 38.400: 99.9590% ( 2) 00:21:44.334 38.400 - 38.633: 99.9692% ( 1) 00:21:44.334 38.866 - 39.098: 99.9795% ( 1) 00:21:44.334 44.218 - 44.451: 99.9897% ( 1) 00:21:44.334 49.571 - 49.804: 100.0000% ( 1) 00:21:44.334 
00:21:44.334 Complete histogram 00:21:44.334 ================== 00:21:44.334 Range in us Cumulative Count 00:21:44.334 5.382 - 5.411: 0.0308% ( 3) 00:21:44.334 5.411 - 5.440: 0.0513% ( 2) 00:21:44.334 5.440 - 5.469: 0.0821% ( 3) 00:21:44.334 5.469 - 5.498: 0.1026% ( 2) 00:21:44.334 5.498 - 5.527: 0.1538% ( 5) 00:21:44.334 5.527 - 5.556: 0.2359% ( 8) 00:21:44.334 5.556 - 5.585: 0.4923% ( 25) 00:21:44.334 5.585 - 5.615: 0.7897% ( 29) 00:21:44.334 5.615 - 5.644: 0.9333% ( 14) 00:21:44.334 5.644 - 5.673: 1.0564% ( 12) 00:21:44.334 5.673 - 5.702: 1.3333% ( 27) 00:21:44.334 5.702 - 5.731: 2.0308% ( 68) 00:21:44.334 5.731 - 5.760: 2.9949% ( 94) 00:21:44.334 5.760 - 5.789: 4.0308% ( 101) 00:21:44.334 5.789 - 5.818: 4.8205% ( 77) 00:21:44.334 5.818 - 5.847: 5.3538% ( 52) 00:21:44.334 5.847 - 5.876: 5.8974% ( 53) 00:21:44.334 5.876 - 5.905: 6.7077% ( 79) 00:21:44.334 5.905 - 5.935: 8.0513% ( 131) 00:21:44.334 5.935 - 5.964: 9.2821% ( 120) 00:21:44.334 5.964 - 5.993: 10.4103% ( 110) 00:21:44.334 5.993 - 6.022: 11.3641% ( 93) 00:21:44.334 6.022 - 6.051: 12.6154% ( 122) 00:21:44.334 6.051 - 6.080: 14.8308% ( 216) 00:21:44.334 6.080 - 6.109: 19.1179% ( 418) 00:21:44.334 6.109 - 6.138: 23.9179% ( 468) 00:21:44.334 6.138 - 6.167: 28.5949% ( 456) 00:21:44.334 6.167 - 6.196: 33.2513% ( 454) 00:21:44.335 6.196 - 6.225: 37.2103% ( 386) 00:21:44.335 6.225 - 6.255: 40.5231% ( 323) 00:21:44.335 6.255 - 6.284: 43.2615% ( 267) 00:21:44.335 6.284 - 6.313: 45.1077% ( 180) 00:21:44.335 6.313 - 6.342: 46.6564% ( 151) 00:21:44.335 6.342 - 6.371: 47.7846% ( 110) 00:21:44.335 6.371 - 6.400: 49.5179% ( 169) 00:21:44.335 6.400 - 6.429: 52.1846% ( 260) 00:21:44.335 6.429 - 6.458: 54.8308% ( 258) 00:21:44.335 6.458 - 6.487: 56.6667% ( 179) 00:21:44.335 6.487 - 6.516: 58.2769% ( 157) 00:21:44.335 6.516 - 6.545: 59.2923% ( 99) 00:21:44.335 6.545 - 6.575: 60.3077% ( 99) 00:21:44.335 6.575 - 6.604: 61.6103% ( 127) 00:21:44.335 6.604 - 6.633: 63.2205% ( 157) 00:21:44.335 6.633 - 6.662: 64.9538% ( 169) 00:21:44.335 6.662 - 6.691: 66.4103% ( 142) 00:21:44.335 6.691 - 6.720: 67.4564% ( 102) 00:21:44.335 6.720 - 6.749: 68.3282% ( 85) 00:21:44.335 6.749 - 6.778: 69.0769% ( 73) 00:21:44.335 6.778 - 6.807: 69.8667% ( 77) 00:21:44.335 6.807 - 6.836: 70.4410% ( 56) 00:21:44.335 6.836 - 6.865: 70.9333% ( 48) 00:21:44.335 6.865 - 6.895: 71.3333% ( 39) 00:21:44.335 6.895 - 6.924: 71.8051% ( 46) 00:21:44.335 6.924 - 6.953: 72.1333% ( 32) 00:21:44.335 6.953 - 6.982: 72.5333% ( 39) 00:21:44.335 6.982 - 7.011: 72.7897% ( 25) 00:21:44.335 7.011 - 7.040: 73.0462% ( 25) 00:21:44.335 7.040 - 7.069: 73.2205% ( 17) 00:21:44.335 7.069 - 7.098: 73.3949% ( 17) 00:21:44.335 7.098 - 7.127: 73.4872% ( 9) 00:21:44.335 7.127 - 7.156: 73.6000% ( 11) 00:21:44.335 7.156 - 7.185: 73.7231% ( 12) 00:21:44.335 7.185 - 7.215: 73.8974% ( 17) 00:21:44.335 7.215 - 7.244: 74.0000% ( 10) 00:21:44.335 7.244 - 7.273: 74.0308% ( 3) 00:21:44.335 7.273 - 7.302: 74.0718% ( 4) 00:21:44.335 7.302 - 7.331: 74.1538% ( 8) 00:21:44.335 7.360 - 7.389: 74.1641% ( 1) 00:21:44.335 7.389 - 7.418: 74.1846% ( 2) 00:21:44.335 7.447 - 7.505: 74.2359% ( 5) 00:21:44.335 7.505 - 7.564: 74.3385% ( 10) 00:21:44.335 7.564 - 7.622: 74.4410% ( 10) 00:21:44.335 7.622 - 7.680: 75.5282% ( 106) 00:21:44.335 7.680 - 7.738: 76.6154% ( 106) 00:21:44.335 7.738 - 7.796: 77.1385% ( 51) 00:21:44.335 7.796 - 7.855: 77.4051% ( 26) 00:21:44.335 7.855 - 7.913: 77.5487% ( 14) 00:21:44.335 7.913 - 7.971: 77.6410% ( 9) 00:21:44.335 7.971 - 8.029: 77.6923% ( 5) 00:21:44.335 8.029 - 8.087: 77.7538% ( 6) 00:21:44.335 
8.087 - 8.145: 77.7846% ( 3) 00:21:44.335 8.145 - 8.204: 78.5026% ( 70) 00:21:44.335 8.204 - 8.262: 80.9744% ( 241) 00:21:44.335 8.262 - 8.320: 82.6462% ( 163) 00:21:44.335 8.320 - 8.378: 83.8564% ( 118) 00:21:44.335 8.378 - 8.436: 84.5436% ( 67) 00:21:44.335 8.436 - 8.495: 84.7897% ( 24) 00:21:44.335 8.495 - 8.553: 85.0359% ( 24) 00:21:44.335 8.553 - 8.611: 85.2615% ( 22) 00:21:44.335 8.611 - 8.669: 85.3846% ( 12) 00:21:44.335 8.669 - 8.727: 85.4462% ( 6) 00:21:44.335 8.727 - 8.785: 85.5692% ( 12) 00:21:44.335 8.785 - 8.844: 85.6205% ( 5) 00:21:44.335 8.844 - 8.902: 85.6821% ( 6) 00:21:44.335 8.902 - 8.960: 85.7026% ( 2) 00:21:44.335 8.960 - 9.018: 85.7231% ( 2) 00:21:44.335 9.018 - 9.076: 85.7436% ( 2) 00:21:44.335 9.076 - 9.135: 85.7744% ( 3) 00:21:44.335 9.193 - 9.251: 85.7846% ( 1) 00:21:44.335 9.251 - 9.309: 85.8051% ( 2) 00:21:44.335 9.309 - 9.367: 85.8256% ( 2) 00:21:44.335 9.367 - 9.425: 85.8359% ( 1) 00:21:44.335 9.425 - 9.484: 85.8667% ( 3) 00:21:44.335 9.484 - 9.542: 85.8872% ( 2) 00:21:44.335 9.600 - 9.658: 85.9077% ( 2) 00:21:44.335 9.658 - 9.716: 85.9487% ( 4) 00:21:44.335 9.716 - 9.775: 86.0000% ( 5) 00:21:44.335 9.775 - 9.833: 86.0308% ( 3) 00:21:44.335 9.833 - 9.891: 86.0718% ( 4) 00:21:44.335 9.891 - 9.949: 86.1026% ( 3) 00:21:44.335 9.949 - 10.007: 86.1641% ( 6) 00:21:44.335 10.007 - 10.065: 86.2154% ( 5) 00:21:44.335 10.065 - 10.124: 86.2974% ( 8) 00:21:44.335 10.124 - 10.182: 86.3692% ( 7) 00:21:44.335 10.182 - 10.240: 86.4513% ( 8) 00:21:44.335 10.240 - 10.298: 86.4923% ( 4) 00:21:44.335 10.298 - 10.356: 86.6051% ( 11) 00:21:44.335 10.356 - 10.415: 86.6769% ( 7) 00:21:44.335 10.415 - 10.473: 86.8000% ( 12) 00:21:44.335 10.473 - 10.531: 86.8923% ( 9) 00:21:44.335 10.531 - 10.589: 87.0359% ( 14) 00:21:44.335 10.589 - 10.647: 87.1590% ( 12) 00:21:44.335 10.647 - 10.705: 87.3744% ( 21) 00:21:44.335 10.705 - 10.764: 87.4872% ( 11) 00:21:44.335 10.764 - 10.822: 87.5692% ( 8) 00:21:44.335 10.822 - 10.880: 87.7744% ( 20) 00:21:44.335 10.880 - 10.938: 87.9077% ( 13) 00:21:44.335 10.938 - 10.996: 88.1128% ( 20) 00:21:44.335 10.996 - 11.055: 88.2667% ( 15) 00:21:44.335 11.055 - 11.113: 88.4103% ( 14) 00:21:44.335 11.113 - 11.171: 88.5744% ( 16) 00:21:44.335 11.171 - 11.229: 88.6974% ( 12) 00:21:44.335 11.229 - 11.287: 88.8718% ( 17) 00:21:44.335 11.287 - 11.345: 89.0154% ( 14) 00:21:44.335 11.345 - 11.404: 89.1795% ( 16) 00:21:44.335 11.404 - 11.462: 89.3436% ( 16) 00:21:44.335 11.462 - 11.520: 89.5077% ( 16) 00:21:44.335 11.520 - 11.578: 89.6615% ( 15) 00:21:44.335 11.578 - 11.636: 89.7744% ( 11) 00:21:44.335 11.636 - 11.695: 89.8564% ( 8) 00:21:44.335 11.695 - 11.753: 89.9487% ( 9) 00:21:44.335 11.753 - 11.811: 90.0821% ( 13) 00:21:44.335 11.811 - 11.869: 90.2051% ( 12) 00:21:44.335 11.869 - 11.927: 90.3795% ( 17) 00:21:44.335 11.927 - 11.985: 90.5231% ( 14) 00:21:44.335 11.985 - 12.044: 90.5846% ( 6) 00:21:44.335 12.044 - 12.102: 90.7590% ( 17) 00:21:44.335 12.102 - 12.160: 90.8718% ( 11) 00:21:44.335 12.160 - 12.218: 91.0359% ( 16) 00:21:44.335 12.218 - 12.276: 91.1487% ( 11) 00:21:44.335 12.276 - 12.335: 91.2410% ( 9) 00:21:44.335 12.335 - 12.393: 91.3641% ( 12) 00:21:44.335 12.393 - 12.451: 91.4462% ( 8) 00:21:44.335 12.451 - 12.509: 91.5590% ( 11) 00:21:44.335 12.509 - 12.567: 91.6513% ( 9) 00:21:44.335 12.567 - 12.625: 91.7231% ( 7) 00:21:44.335 12.625 - 12.684: 91.7949% ( 7) 00:21:44.335 12.684 - 12.742: 91.8769% ( 8) 00:21:44.335 12.742 - 12.800: 91.9590% ( 8) 00:21:44.335 12.800 - 12.858: 92.0513% ( 9) 00:21:44.335 12.858 - 12.916: 92.0923% ( 4) 00:21:44.335 12.916 - 
12.975: 92.1846% ( 9) 00:21:44.335 12.975 - 13.033: 92.2667% ( 8) 00:21:44.335 13.033 - 13.091: 92.3590% ( 9) 00:21:44.335 13.091 - 13.149: 92.4821% ( 12) 00:21:44.335 13.149 - 13.207: 92.5538% ( 7) 00:21:44.335 13.207 - 13.265: 92.6359% ( 8) 00:21:44.335 13.265 - 13.324: 92.7077% ( 7) 00:21:44.335 13.324 - 13.382: 92.7385% ( 3) 00:21:44.335 13.382 - 13.440: 92.8410% ( 10) 00:21:44.335 13.440 - 13.498: 92.9231% ( 8) 00:21:44.335 13.498 - 13.556: 93.0154% ( 9) 00:21:44.335 13.556 - 13.615: 93.0769% ( 6) 00:21:44.335 13.615 - 13.673: 93.1077% ( 3) 00:21:44.335 13.673 - 13.731: 93.1487% ( 4) 00:21:44.335 13.731 - 13.789: 93.2103% ( 6) 00:21:44.335 13.789 - 13.847: 93.2513% ( 4) 00:21:44.335 13.847 - 13.905: 93.2923% ( 4) 00:21:44.335 13.905 - 13.964: 93.3641% ( 7) 00:21:44.335 13.964 - 14.022: 93.4256% ( 6) 00:21:44.335 14.022 - 14.080: 93.4667% ( 4) 00:21:44.335 14.080 - 14.138: 93.4974% ( 3) 00:21:44.335 14.138 - 14.196: 93.5179% ( 2) 00:21:44.335 14.196 - 14.255: 93.5385% ( 2) 00:21:44.335 14.255 - 14.313: 93.5897% ( 5) 00:21:44.335 14.313 - 14.371: 93.6000% ( 1) 00:21:44.335 14.371 - 14.429: 93.6308% ( 3) 00:21:44.335 14.429 - 14.487: 93.6718% ( 4) 00:21:44.335 14.487 - 14.545: 93.7128% ( 4) 00:21:44.335 14.545 - 14.604: 93.7436% ( 3) 00:21:44.335 14.604 - 14.662: 93.7641% ( 2) 00:21:44.335 14.662 - 14.720: 93.7949% ( 3) 00:21:44.335 14.720 - 14.778: 93.8256% ( 3) 00:21:44.335 14.778 - 14.836: 93.8564% ( 3) 00:21:44.335 14.836 - 14.895: 93.8974% ( 4) 00:21:44.335 14.895 - 15.011: 93.9487% ( 5) 00:21:44.335 15.011 - 15.127: 93.9795% ( 3) 00:21:44.335 15.127 - 15.244: 93.9897% ( 1) 00:21:44.335 15.244 - 15.360: 94.0205% ( 3) 00:21:44.335 15.360 - 15.476: 94.0513% ( 3) 00:21:44.335 15.476 - 15.593: 94.1026% ( 5) 00:21:44.335 15.593 - 15.709: 94.1538% ( 5) 00:21:44.335 15.709 - 15.825: 94.1949% ( 4) 00:21:44.335 15.825 - 15.942: 94.2256% ( 3) 00:21:44.335 15.942 - 16.058: 94.2667% ( 4) 00:21:44.335 16.058 - 16.175: 94.2769% ( 1) 00:21:44.335 16.175 - 16.291: 94.2872% ( 1) 00:21:44.336 16.291 - 16.407: 94.2974% ( 1) 00:21:44.336 16.524 - 16.640: 94.3077% ( 1) 00:21:44.336 16.640 - 16.756: 94.3590% ( 5) 00:21:44.336 16.756 - 16.873: 94.3795% ( 2) 00:21:44.336 16.873 - 16.989: 94.4103% ( 3) 00:21:44.336 16.989 - 17.105: 94.4308% ( 2) 00:21:44.336 17.222 - 17.338: 94.4615% ( 3) 00:21:44.336 17.455 - 17.571: 94.4821% ( 2) 00:21:44.336 17.571 - 17.687: 94.4923% ( 1) 00:21:44.336 17.687 - 17.804: 94.5026% ( 1) 00:21:44.336 17.804 - 17.920: 94.5231% ( 2) 00:21:44.336 17.920 - 18.036: 94.5538% ( 3) 00:21:44.336 18.036 - 18.153: 94.5641% ( 1) 00:21:44.336 18.153 - 18.269: 94.5846% ( 2) 00:21:44.336 18.269 - 18.386: 94.6359% ( 5) 00:21:44.336 18.386 - 18.502: 94.6564% ( 2) 00:21:44.336 18.502 - 18.618: 94.6769% ( 2) 00:21:44.336 18.618 - 18.735: 94.6872% ( 1) 00:21:44.336 18.735 - 18.851: 94.7077% ( 2) 00:21:44.336 18.967 - 19.084: 94.7179% ( 1) 00:21:44.336 19.084 - 19.200: 94.7385% ( 2) 00:21:44.336 19.200 - 19.316: 94.7487% ( 1) 00:21:44.336 19.316 - 19.433: 94.7590% ( 1) 00:21:44.336 19.433 - 19.549: 94.7692% ( 1) 00:21:44.336 19.549 - 19.666: 94.7795% ( 1) 00:21:44.336 19.782 - 19.898: 94.7897% ( 1) 00:21:44.336 19.898 - 20.015: 94.8000% ( 1) 00:21:44.336 20.015 - 20.131: 94.8205% ( 2) 00:21:44.336 20.131 - 20.247: 94.8513% ( 3) 00:21:44.336 20.247 - 20.364: 94.9128% ( 6) 00:21:44.336 20.364 - 20.480: 94.9436% ( 3) 00:21:44.336 20.480 - 20.596: 95.0564% ( 11) 00:21:44.336 20.596 - 20.713: 95.1692% ( 11) 00:21:44.336 20.713 - 20.829: 95.2821% ( 11) 00:21:44.336 20.829 - 20.946: 95.5590% ( 27) 
00:21:44.336 20.946 - 21.062: 95.8051% ( 24) 00:21:44.336 21.062 - 21.178: 96.2051% ( 39) 00:21:44.336 21.178 - 21.295: 96.5949% ( 38) 00:21:44.336 21.295 - 21.411: 96.8923% ( 29) 00:21:44.336 21.411 - 21.527: 97.2513% ( 35) 00:21:44.336 21.527 - 21.644: 97.5282% ( 27) 00:21:44.336 21.644 - 21.760: 97.7231% ( 19) 00:21:44.336 21.760 - 21.876: 97.8359% ( 11) 00:21:44.336 21.876 - 21.993: 97.9282% ( 9) 00:21:44.336 21.993 - 22.109: 97.9795% ( 5) 00:21:44.336 22.109 - 22.226: 98.0615% ( 8) 00:21:44.336 22.226 - 22.342: 98.1333% ( 7) 00:21:44.336 22.342 - 22.458: 98.1641% ( 3) 00:21:44.336 22.458 - 22.575: 98.2154% ( 5) 00:21:44.336 22.575 - 22.691: 98.2667% ( 5) 00:21:44.336 22.691 - 22.807: 98.3077% ( 4) 00:21:44.336 22.807 - 22.924: 98.3282% ( 2) 00:21:44.336 22.924 - 23.040: 98.3795% ( 5) 00:21:44.336 23.040 - 23.156: 98.4000% ( 2) 00:21:44.336 23.156 - 23.273: 98.4103% ( 1) 00:21:44.336 23.273 - 23.389: 98.4410% ( 3) 00:21:44.336 23.389 - 23.506: 98.4615% ( 2) 00:21:44.336 23.506 - 23.622: 98.5026% ( 4) 00:21:44.336 23.622 - 23.738: 98.5333% ( 3) 00:21:44.336 23.971 - 24.087: 98.5436% ( 1) 00:21:44.336 24.087 - 24.204: 98.5538% ( 1) 00:21:44.336 24.320 - 24.436: 98.5641% ( 1) 00:21:44.336 24.553 - 24.669: 98.5846% ( 2) 00:21:44.336 24.669 - 24.786: 98.6051% ( 2) 00:21:44.336 24.902 - 25.018: 98.6256% ( 2) 00:21:44.336 25.018 - 25.135: 98.6564% ( 3) 00:21:44.336 25.135 - 25.251: 98.7077% ( 5) 00:21:44.336 25.251 - 25.367: 98.7487% ( 4) 00:21:44.336 25.367 - 25.484: 98.7590% ( 1) 00:21:44.336 25.484 - 25.600: 98.8000% ( 4) 00:21:44.336 25.600 - 25.716: 98.8718% ( 7) 00:21:44.336 25.833 - 25.949: 98.9026% ( 3) 00:21:44.336 25.949 - 26.066: 98.9128% ( 1) 00:21:44.336 26.066 - 26.182: 98.9641% ( 5) 00:21:44.336 26.182 - 26.298: 99.0051% ( 4) 00:21:44.336 26.298 - 26.415: 99.0564% ( 5) 00:21:44.336 26.415 - 26.531: 99.1077% ( 5) 00:21:44.336 26.531 - 26.647: 99.1487% ( 4) 00:21:44.336 26.647 - 26.764: 99.1897% ( 4) 00:21:44.336 26.764 - 26.880: 99.2410% ( 5) 00:21:44.336 26.880 - 26.996: 99.2718% ( 3) 00:21:44.336 26.996 - 27.113: 99.3231% ( 5) 00:21:44.336 27.113 - 27.229: 99.3538% ( 3) 00:21:44.336 27.229 - 27.346: 99.3846% ( 3) 00:21:44.336 27.346 - 27.462: 99.4256% ( 4) 00:21:44.336 27.578 - 27.695: 99.4462% ( 2) 00:21:44.336 27.695 - 27.811: 99.4564% ( 1) 00:21:44.336 27.811 - 27.927: 99.4667% ( 1) 00:21:44.336 27.927 - 28.044: 99.4769% ( 1) 00:21:44.336 28.044 - 28.160: 99.4872% ( 1) 00:21:44.336 28.160 - 28.276: 99.5179% ( 3) 00:21:44.336 28.276 - 28.393: 99.5590% ( 4) 00:21:44.336 28.393 - 28.509: 99.5795% ( 2) 00:21:44.336 28.509 - 28.626: 99.5897% ( 1) 00:21:44.336 28.626 - 28.742: 99.6103% ( 2) 00:21:44.336 28.975 - 29.091: 99.6308% ( 2) 00:21:44.336 29.091 - 29.207: 99.6410% ( 1) 00:21:44.336 29.324 - 29.440: 99.6718% ( 3) 00:21:44.336 29.440 - 29.556: 99.6821% ( 1) 00:21:44.336 29.556 - 29.673: 99.6923% ( 1) 00:21:44.336 29.673 - 29.789: 99.7026% ( 1) 00:21:44.336 29.789 - 30.022: 99.7128% ( 1) 00:21:44.336 30.022 - 30.255: 99.7436% ( 3) 00:21:44.336 30.255 - 30.487: 99.7538% ( 1) 00:21:44.336 31.186 - 31.418: 99.7641% ( 1) 00:21:44.336 31.651 - 31.884: 99.7744% ( 1) 00:21:44.336 31.884 - 32.116: 99.7846% ( 1) 00:21:44.336 32.815 - 33.047: 99.7949% ( 1) 00:21:44.336 33.978 - 34.211: 99.8051% ( 1) 00:21:44.336 34.444 - 34.676: 99.8154% ( 1) 00:21:44.336 34.676 - 34.909: 99.8256% ( 1) 00:21:44.336 35.142 - 35.375: 99.8359% ( 1) 00:21:44.336 36.306 - 36.538: 99.8462% ( 1) 00:21:44.336 36.538 - 36.771: 99.8564% ( 1) 00:21:44.336 37.236 - 37.469: 99.8769% ( 2) 00:21:44.336 37.702 - 
37.935: 99.8974% ( 2) 00:21:44.336 37.935 - 38.167: 99.9179% ( 2) 00:21:44.336 38.633 - 38.866: 99.9282% ( 1) 00:21:44.336 39.098 - 39.331: 99.9385% ( 1) 00:21:44.336 39.331 - 39.564: 99.9487% ( 1) 00:21:44.336 39.564 - 39.796: 99.9590% ( 1) 00:21:44.336 40.727 - 40.960: 99.9692% ( 1) 00:21:44.336 41.193 - 41.426: 99.9795% ( 1) 00:21:44.336 50.967 - 51.200: 99.9897% ( 1) 00:21:44.336 103.331 - 103.797: 100.0000% ( 1) 00:21:44.336 00:21:44.336 00:21:44.336 real 0m1.623s 00:21:44.336 user 0m1.023s 00:21:44.336 sys 0m0.599s 00:21:44.336 06:34:56 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:44.336 06:34:56 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:21:44.336 ************************************ 00:21:44.336 END TEST nvme_overhead 00:21:44.336 ************************************ 00:21:44.336 06:34:56 nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:44.336 06:34:56 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:21:44.336 06:34:56 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:21:44.336 06:34:56 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:44.336 06:34:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:44.336 ************************************ 00:21:44.336 START TEST nvme_arbitration 00:21:44.336 ************************************ 00:21:44.336 06:34:56 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:21:44.902 EAL: TSC is not safe to use in SMP mode 00:21:44.902 EAL: TSC is not invariant 00:21:44.902 [2024-07-23 06:34:57.338167] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:48.207 Initializing NVMe Controllers 00:21:48.207 Attaching to 0000:00:10.0 00:21:48.207 Attached to 0000:00:10.0 00:21:48.207 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:21:48.207 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:21:48.207 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:21:48.207 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:21:48.207 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:21:48.207 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:21:48.207 Initialization complete. Launching workers. 
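In the per-core results that follow, the secs/100000 ios column is the projected wall-clock time for the 100000 I/Os requested with -n at the measured rate, i.e. 100000 / IO/s; for core 0, 100000 / 6194.67 IO/s ≈ 16.14 s.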
00:21:48.207 Starting thread on core 1 with urgent priority queue 00:21:48.207 Starting thread on core 2 with urgent priority queue 00:21:48.207 Starting thread on core 3 with urgent priority queue 00:21:48.207 Starting thread on core 0 with urgent priority queue 00:21:48.207 QEMU NVMe Ctrl (12340 ) core 0: 6194.67 IO/s 16.14 secs/100000 ios 00:21:48.207 QEMU NVMe Ctrl (12340 ) core 1: 6143.00 IO/s 16.28 secs/100000 ios 00:21:48.207 QEMU NVMe Ctrl (12340 ) core 2: 6268.67 IO/s 15.95 secs/100000 ios 00:21:48.207 QEMU NVMe Ctrl (12340 ) core 3: 6194.00 IO/s 16.14 secs/100000 ios 00:21:48.207 ======================================================== 00:21:48.207 00:21:48.207 00:21:48.207 real 0m3.768s 00:21:48.207 user 0m12.208s 00:21:48.207 sys 0m0.590s 00:21:48.207 06:35:00 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:48.207 06:35:00 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:21:48.207 ************************************ 00:21:48.207 END TEST nvme_arbitration 00:21:48.207 ************************************ 00:21:48.207 06:35:00 nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:48.207 06:35:00 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:21:48.207 06:35:00 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:48.207 06:35:00 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.207 06:35:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:48.207 ************************************ 00:21:48.207 START TEST nvme_single_aen 00:21:48.207 ************************************ 00:21:48.207 06:35:00 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:21:48.774 EAL: TSC is not safe to use in SMP mode 00:21:48.774 EAL: TSC is not invariant 00:21:48.774 [2024-07-23 06:35:01.146934] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:48.774 Asynchronous Event Request test 00:21:48.774 Attaching to 0000:00:10.0 00:21:48.774 Attached to 0000:00:10.0 00:21:48.774 Reset controller to setup AER completions for this process 00:21:48.774 Registering asynchronous event callbacks... 00:21:48.774 Getting orig temperature thresholds of all controllers 00:21:48.774 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:48.774 Setting all controllers temperature threshold low to trigger AER 00:21:48.774 Waiting for all controllers temperature threshold to be set lower 00:21:48.774 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:48.774 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:21:48.774 Waiting for all controllers to trigger AER and reset threshold 00:21:48.774 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:48.774 Cleaning up... 
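The temperatures above are reported by the controller in integer Kelvin and converted by subtracting 273 (343 K = 70 C, 323 K = 50 C). Lowering the threshold below the current temperature is what forces the controller to fire the asynchronous event: in the aer_cb line, aen_event_type 0x01 is the SMART/health status event class and aen_event_info 0x01 the temperature-threshold condition defined by the NVMe specification, with log page 2 (SMART / Health Information) identified as the page the callback reads before resetting the threshold.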
00:21:48.774 00:21:48.774 real 0m0.600s 00:21:48.774 user 0m0.003s 00:21:48.774 sys 0m0.596s 00:21:48.774 06:35:01 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:48.774 06:35:01 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:21:48.774 ************************************ 00:21:48.774 END TEST nvme_single_aen 00:21:48.774 ************************************ 00:21:48.774 06:35:01 nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:48.774 06:35:01 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:21:48.774 06:35:01 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:48.774 06:35:01 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.774 06:35:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:48.774 ************************************ 00:21:48.774 START TEST nvme_doorbell_aers 00:21:48.774 ************************************ 00:21:48.774 06:35:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:21:48.774 06:35:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:21:48.774 06:35:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:21:48.774 06:35:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:21:48.774 06:35:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:21:48.774 06:35:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:21:48.774 06:35:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:21:48.774 06:35:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:48.774 06:35:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:48.774 06:35:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:21:48.774 06:35:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:21:48.774 06:35:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:21:48.774 06:35:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:21:48.774 06:35:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:49.342 EAL: TSC is not safe to use in SMP mode 00:21:49.342 EAL: TSC is not invariant 00:21:49.342 [2024-07-23 06:35:01.823561] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:49.342 Executing: test_write_invalid_db 00:21:49.342 Waiting for AER completion... 00:21:49.342 Asynchronous Event received. 00:21:49.342 Error Informaton Log Page received. 00:21:49.342 Success: test_write_invalid_db 00:21:49.342 00:21:49.342 Executing: test_invalid_db_write_overflow_sq 00:21:49.342 Waiting for AER completion... 00:21:49.342 Asynchronous Event received. 00:21:49.342 Error Informaton Log Page received. 00:21:49.342 Success: test_invalid_db_write_overflow_sq 00:21:49.342 00:21:49.342 Executing: test_invalid_db_write_overflow_cq 00:21:49.342 Waiting for AER completion... 00:21:49.342 Asynchronous Event received. 00:21:49.342 Error Informaton Log Page received. 
00:21:49.342 Success: test_invalid_db_write_overflow_cq 00:21:49.342 00:21:49.342 00:21:49.342 real 0m0.625s 00:21:49.342 user 0m0.014s 00:21:49.342 sys 0m0.626s 00:21:49.342 06:35:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:49.342 06:35:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 ************************************ 00:21:49.342 END TEST nvme_doorbell_aers 00:21:49.342 ************************************ 00:21:49.601 06:35:01 nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:49.601 06:35:01 nvme -- nvme/nvme.sh@97 -- # uname 00:21:49.601 06:35:01 nvme -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:21:49.601 06:35:01 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:21:49.601 06:35:01 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:49.601 06:35:01 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:49.601 06:35:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:49.601 ************************************ 00:21:49.601 START TEST bdev_nvme_reset_stuck_adm_cmd 00:21:49.601 ************************************ 00:21:49.601 06:35:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:21:49.601 * Looking for test storage... 00:21:49.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=69095 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 69095 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 69095 ']' 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.601 06:35:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:49.601 [2024-07-23 06:35:02.100010] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:49.601 [2024-07-23 06:35:02.100154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:50.168 EAL: TSC is not safe to use in SMP mode 00:21:50.168 EAL: TSC is not invariant 00:21:50.168 [2024-07-23 06:35:02.649276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:50.426 [2024-07-23 06:35:02.737510] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:50.426 [2024-07-23 06:35:02.737592] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:21:50.426 [2024-07-23 06:35:02.737618] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:21:50.426 [2024-07-23 06:35:02.737626] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 
00:21:50.426 [2024-07-23 06:35:02.741622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.426 [2024-07-23 06:35:02.741865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.426 [2024-07-23 06:35:02.741748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.426 [2024-07-23 06:35:02.741860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.685 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.685 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:21:50.685 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:21:50.685 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.685 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:50.685 [2024-07-23 06:35:03.134205] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:50.685 nvme0n1 00:21:50.685 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.685 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:21:50.685 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:21:50.685 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:21:50.685 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.685 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:50.685 true 00:21:50.685 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.685 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:21:50.685 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721716503 00:21:50.951 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=69107 00:21:50.951 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:50.951 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:21:50.951 06:35:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:52.858 [2024-07-23 06:35:05.310454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:21:52.858 [2024-07-23 06:35:05.310623] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:52.858 [2024-07-23 06:35:05.310641] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:52.858 [2024-07-23 06:35:05.310651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.858 [2024-07-23 06:35:05.311707] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.858 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 69107 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 69107 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 69107 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.6NbQ0x 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.55d5AF 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 69095 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 69095 ']' 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 69095 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps -c -o command 69095 00:21:52.858 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # tail -1 00:21:53.116 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:21:53.116 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:21:53.117 killing process with pid 69095 00:21:53.117 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69095' 00:21:53.117 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 69095 00:21:53.117 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 69095 00:21:53.375 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:21:53.375 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:21:53.375 00:21:53.375 real 0m3.755s 00:21:53.375 user 0m12.218s 00:21:53.375 sys 0m0.810s 00:21:53.375 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:53.375 06:35:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:53.375 ************************************ 00:21:53.375 END TEST bdev_nvme_reset_stuck_adm_cmd 00:21:53.375 ************************************ 00:21:53.375 06:35:05 nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:53.375 06:35:05 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:21:53.375 06:35:05 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:21:53.375 06:35:05 nvme -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:53.375 06:35:05 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:53.375 06:35:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:53.375 ************************************ 00:21:53.375 START TEST nvme_fio 00:21:53.375 ************************************ 00:21:53.375 06:35:05 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:21:53.375 06:35:05 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:53.375 06:35:05 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:21:53.375 06:35:05 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:21:53.375 06:35:05 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:21:53.375 06:35:05 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:21:53.375 06:35:05 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:53.375 06:35:05 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:53.375 06:35:05 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:21:53.375 06:35:05 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:21:53.375 06:35:05 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:21:53.375 06:35:05 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:21:53.375 06:35:05 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:21:53.375 06:35:05 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:21:53.375 06:35:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:53.375 06:35:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:21:53.942 EAL: TSC is not safe to use in SMP mode 00:21:53.942 EAL: TSC is not invariant 00:21:53.942 [2024-07-23 06:35:06.335028] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:53.942 06:35:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:53.942 06:35:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:21:54.510 EAL: TSC is not safe to use in SMP mode 00:21:54.510 EAL: TSC is not invariant 00:21:54.510 [2024-07-23 06:35:06.916571] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:54.510 06:35:06 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:21:54.510 06:35:06 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:54.510 06:35:06 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:21:54.510 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:54.510 fio-3.35 00:21:54.768 Starting 1 thread 00:21:55.335 EAL: TSC is not safe to use in SMP mode 00:21:55.335 EAL: TSC is not invariant 00:21:55.335 [2024-07-23 06:35:07.615357] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:21:57.910 00:21:57.910 test: (groupid=0, jobs=1): err= 0: pid=101537: Tue Jul 23 06:35:10 2024 00:21:57.910 read: IOPS=43.9k, BW=171MiB/s (180MB/s)(343MiB/2001msec) 00:21:57.910 slat (nsec): min=402, max=37931, avg=598.26, stdev=819.71 00:21:57.910 clat (usec): min=291, max=3548, avg=1458.56, stdev=293.31 00:21:57.910 lat (usec): min=292, max=3578, avg=1459.16, stdev=293.34 00:21:57.910 clat percentiles (usec): 00:21:57.910 | 1.00th=[ 506], 5.00th=[ 1123], 10.00th=[ 1188], 20.00th=[ 1270], 00:21:57.910 | 30.00th=[ 1336], 40.00th=[ 1385], 50.00th=[ 1434], 60.00th=[ 1483], 00:21:57.910 | 70.00th=[ 1549], 80.00th=[ 1647], 90.00th=[ 1778], 95.00th=[ 1942], 00:21:57.910 | 99.00th=[ 2376], 99.50th=[ 2573], 99.90th=[ 3064], 99.95th=[ 3228], 00:21:57.910 | 99.99th=[ 3490] 00:21:57.910 bw ( KiB/s): min=167792, max=188151, per=100.00%, avg=177706.33, stdev=10189.86, samples=3 00:21:57.910 iops : min=41948, max=47037, avg=44426.33, stdev=2547.08, samples=3 00:21:57.910 write: IOPS=43.8k, BW=171MiB/s (179MB/s)(342MiB/2001msec); 0 zone resets 00:21:57.910 slat (nsec): min=435, max=32382, avg=840.21, stdev=1304.82 00:21:57.910 clat (usec): min=276, max=3569, avg=1459.38, stdev=295.94 00:21:57.910 lat (usec): min=277, max=3570, avg=1460.22, stdev=295.95 00:21:57.910 clat percentiles (usec): 00:21:57.910 | 1.00th=[ 502], 5.00th=[ 1123], 10.00th=[ 1188], 20.00th=[ 1270], 00:21:57.910 
| 30.00th=[ 1336], 40.00th=[ 1385], 50.00th=[ 1434], 60.00th=[ 1483], 00:21:57.910 | 70.00th=[ 1549], 80.00th=[ 1647], 90.00th=[ 1778], 95.00th=[ 1942], 00:21:57.910 | 99.00th=[ 2409], 99.50th=[ 2606], 99.90th=[ 3130], 99.95th=[ 3294], 00:21:57.910 | 99.99th=[ 3490] 00:21:57.910 bw ( KiB/s): min=167776, max=188135, per=100.00%, avg=176847.67, stdev=10358.77, samples=3 00:21:57.910 iops : min=41944, max=47033, avg=44211.67, stdev=2589.28, samples=3 00:21:57.910 lat (usec) : 500=0.97%, 750=1.33%, 1000=0.59% 00:21:57.910 lat (msec) : 2=93.25%, 4=3.86% 00:21:57.910 cpu : usr=100.05%, sys=0.00%, ctx=22, majf=0, minf=2 00:21:57.910 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:21:57.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:57.910 issued rwts: total=87788,87547,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.910 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:57.911 00:21:57.911 Run status group 0 (all jobs): 00:21:57.911 READ: bw=171MiB/s (180MB/s), 171MiB/s-171MiB/s (180MB/s-180MB/s), io=343MiB (360MB), run=2001-2001msec 00:21:57.911 WRITE: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=342MiB (359MB), run=2001-2001msec 00:21:58.477 06:35:10 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:21:58.477 06:35:10 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:21:58.477 ************************************ 00:21:58.477 END TEST nvme_fio 00:21:58.477 ************************************ 00:21:58.477 00:21:58.477 real 0m5.127s 00:21:58.477 user 0m2.345s 00:21:58.477 sys 0m2.713s 00:21:58.477 06:35:10 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:58.477 06:35:10 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:21:58.477 06:35:10 nvme -- common/autotest_common.sh@1142 -- # return 0 00:21:58.477 00:21:58.477 real 0m25.144s 00:21:58.477 user 0m30.768s 00:21:58.477 sys 0m12.460s 00:21:58.477 06:35:10 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:58.477 06:35:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:58.477 ************************************ 00:21:58.477 END TEST nvme 00:21:58.477 ************************************ 00:21:58.477 06:35:10 -- common/autotest_common.sh@1142 -- # return 0 00:21:58.477 06:35:10 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:21:58.477 06:35:10 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:21:58.477 06:35:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:58.477 06:35:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:58.477 06:35:10 -- common/autotest_common.sh@10 -- # set +x 00:21:58.477 ************************************ 00:21:58.477 START TEST nvme_scc 00:21:58.477 ************************************ 00:21:58.477 06:35:10 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:21:58.735 * Looking for test storage... 
00:21:58.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:58.735 06:35:11 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:21:58.735 06:35:11 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:21:58.735 06:35:11 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:21:58.735 06:35:11 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:58.735 06:35:11 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:58.735 06:35:11 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.735 06:35:11 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.735 06:35:11 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.736 06:35:11 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:21:58.736 06:35:11 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:21:58.736 06:35:11 nvme_scc -- paths/export.sh@4 -- # export PATH 00:21:58.736 06:35:11 nvme_scc -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:21:58.736 06:35:11 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:21:58.736 06:35:11 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:21:58.736 06:35:11 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:21:58.736 06:35:11 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:21:58.736 06:35:11 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:21:58.736 06:35:11 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:21:58.736 06:35:11 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:21:58.736 06:35:11 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:21:58.736 06:35:11 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:21:58.736 06:35:11 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.736 06:35:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:21:58.736 06:35:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:21:58.736 06:35:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0 00:21:58.736 00:21:58.736 real 0m0.161s 00:21:58.736 user 0m0.116s 00:21:58.736 sys 0m0.119s 00:21:58.736 06:35:11 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:58.736 06:35:11 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:21:58.736 ************************************ 00:21:58.736 END TEST nvme_scc 00:21:58.736 ************************************ 00:21:58.736 06:35:11 -- common/autotest_common.sh@1142 -- # return 0 00:21:58.736 06:35:11 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:21:58.736 06:35:11 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:21:58.736 06:35:11 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:21:58.736 06:35:11 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:21:58.736 06:35:11 -- 
spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:21:58.736 06:35:11 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:21:58.736 06:35:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:58.736 06:35:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:58.736 06:35:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.736 ************************************ 00:21:58.736 START TEST nvme_rpc 00:21:58.736 ************************************ 00:21:58.736 06:35:11 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:21:58.995 * Looking for test storage... 00:21:58.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:58.995 06:35:11 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.995 06:35:11 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:21:58.995 06:35:11 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:21:58.995 06:35:11 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=69349 00:21:58.995 06:35:11 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:21:58.995 06:35:11 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:21:58.995 06:35:11 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 69349 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 69349 ']' 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.995 06:35:11 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:58.995 [2024-07-23 06:35:11.336041] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:58.995 [2024-07-23 06:35:11.336214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:59.562 EAL: TSC is not safe to use in SMP mode 00:21:59.562 EAL: TSC is not invariant 00:21:59.562 [2024-07-23 06:35:11.888352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:59.562 [2024-07-23 06:35:11.985360] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:59.562 [2024-07-23 06:35:11.985432] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:21:59.562 [2024-07-23 06:35:11.988892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.562 [2024-07-23 06:35:11.988881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.135 06:35:12 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.135 06:35:12 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:22:00.135 06:35:12 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:22:00.392 [2024-07-23 06:35:12.676055] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:22:00.392 Nvme0n1 00:22:00.392 06:35:12 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:22:00.392 06:35:12 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:22:00.650 request: 00:22:00.650 { 00:22:00.650 "bdev_name": "Nvme0n1", 00:22:00.650 "filename": "non_existing_file", 00:22:00.650 "method": "bdev_nvme_apply_firmware", 00:22:00.650 "req_id": 1 00:22:00.650 } 00:22:00.650 Got JSON-RPC error response 00:22:00.650 response: 00:22:00.650 { 00:22:00.650 "code": -32603, 00:22:00.650 "message": "open file failed." 
00:22:00.650 } 00:22:00.650 06:35:13 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:22:00.650 06:35:13 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:22:00.650 06:35:13 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:22:00.907 06:35:13 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:22:00.907 06:35:13 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 69349 00:22:00.907 06:35:13 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 69349 ']' 00:22:00.907 06:35:13 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 69349 00:22:00.907 06:35:13 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:22:00.907 06:35:13 nvme_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:22:00.907 06:35:13 nvme_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 69349 00:22:00.907 06:35:13 nvme_rpc -- common/autotest_common.sh@956 -- # tail -1 00:22:00.907 06:35:13 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:22:00.907 06:35:13 nvme_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:22:00.907 killing process with pid 69349 00:22:00.907 06:35:13 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69349' 00:22:00.907 06:35:13 nvme_rpc -- common/autotest_common.sh@967 -- # kill 69349 00:22:00.907 06:35:13 nvme_rpc -- common/autotest_common.sh@972 -- # wait 69349 00:22:01.165 00:22:01.165 real 0m2.398s 00:22:01.165 user 0m4.375s 00:22:01.165 sys 0m0.850s 00:22:01.165 06:35:13 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:01.165 06:35:13 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:01.165 ************************************ 00:22:01.165 END TEST nvme_rpc 00:22:01.165 ************************************ 00:22:01.165 06:35:13 -- common/autotest_common.sh@1142 -- # return 0 00:22:01.165 06:35:13 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:22:01.165 06:35:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:01.165 06:35:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:01.165 06:35:13 -- common/autotest_common.sh@10 -- # set +x 00:22:01.165 ************************************ 00:22:01.165 START TEST nvme_rpc_timeouts 00:22:01.165 ************************************ 00:22:01.165 06:35:13 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:22:01.423 * Looking for test storage... 
00:22:01.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:22:01.423 06:35:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:01.423 06:35:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_69386 00:22:01.423 06:35:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_69386 00:22:01.423 06:35:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=69414 00:22:01.423 06:35:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:22:01.423 06:35:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:22:01.423 06:35:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 69414 00:22:01.423 06:35:13 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 69414 ']' 00:22:01.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.423 06:35:13 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.423 06:35:13 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.423 06:35:13 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.423 06:35:13 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.424 06:35:13 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:22:01.424 [2024-07-23 06:35:13.732908] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:01.424 [2024-07-23 06:35:13.733088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:22:01.990 EAL: TSC is not safe to use in SMP mode 00:22:01.990 EAL: TSC is not invariant 00:22:01.990 [2024-07-23 06:35:14.279561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:01.990 [2024-07-23 06:35:14.358719] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:22:01.990 [2024-07-23 06:35:14.358800] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:22:01.990 [2024-07-23 06:35:14.361861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.990 [2024-07-23 06:35:14.361841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.555 06:35:14 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.555 06:35:14 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:22:02.555 Checking default timeout settings: 00:22:02.555 06:35:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:22:02.555 06:35:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:02.813 Making settings changes with rpc: 00:22:02.813 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:22:02.813 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:22:02.813 Check default vs. modified settings: 00:22:02.813 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:22:02.813 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_69386 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_69386 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:22:03.380 Setting action_on_timeout is changed as expected. 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_69386 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_69386 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:22:03.380 Setting timeout_us is changed as expected. 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_69386 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_69386 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:22:03.380 Setting timeout_admin_us is changed as expected. 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_69386 /tmp/settings_modified_69386 00:22:03.380 06:35:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 69414 00:22:03.380 06:35:15 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 69414 ']' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 69414 00:22:03.380 06:35:15 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:22:03.380 06:35:15 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps -c -o command 69414 00:22:03.380 06:35:15 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # tail -1 00:22:03.380 06:35:15 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:22:03.380 killing process with pid 69414 00:22:03.380 06:35:15 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69414' 00:22:03.380 06:35:15 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 69414 00:22:03.380 06:35:15 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 69414 00:22:03.638 RPC TIMEOUT SETTING TEST PASSED. 00:22:03.638 06:35:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:22:03.638 00:22:03.638 real 0m2.446s 00:22:03.638 user 0m4.508s 00:22:03.638 sys 0m0.871s 00:22:03.638 06:35:16 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:03.638 06:35:16 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:22:03.638 ************************************ 00:22:03.638 END TEST nvme_rpc_timeouts 00:22:03.638 ************************************ 00:22:03.638 06:35:16 -- common/autotest_common.sh@1142 -- # return 0 00:22:03.638 06:35:16 -- spdk/autotest.sh@243 -- # uname -s 00:22:03.638 06:35:16 -- spdk/autotest.sh@243 -- # '[' FreeBSD = Linux ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:22:03.639 06:35:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:22:03.639 06:35:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:03.639 06:35:16 -- common/autotest_common.sh@10 -- # set +x 00:22:03.639 06:35:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:22:03.639 06:35:16 -- spdk/autotest.sh@363 -- # [[ 0 -eq 
1 ]] 00:22:03.639 06:35:16 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:22:03.639 06:35:16 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:22:03.639 06:35:16 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:22:03.639 06:35:16 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:22:03.639 06:35:16 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:22:03.639 06:35:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:03.639 06:35:16 -- common/autotest_common.sh@10 -- # set +x 00:22:03.639 06:35:16 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:22:03.639 06:35:16 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:03.639 06:35:16 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:03.639 06:35:16 -- common/autotest_common.sh@10 -- # set +x 00:22:04.205 setup.sh cleanup function not yet supported on FreeBSD 00:22:04.205 06:35:16 -- common/autotest_common.sh@1451 -- # return 0 00:22:04.205 06:35:16 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:22:04.205 06:35:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:04.205 06:35:16 -- common/autotest_common.sh@10 -- # set +x 00:22:04.463 06:35:16 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:22:04.463 06:35:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:04.463 06:35:16 -- common/autotest_common.sh@10 -- # set +x 00:22:04.463 06:35:16 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:04.463 06:35:16 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:04.463 06:35:16 -- spdk/autotest.sh@391 -- # hash lcov 00:22:04.463 /home/vagrant/spdk_repo/spdk/autotest.sh: line 391: hash: lcov: not found 00:22:04.463 06:35:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:04.463 06:35:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:04.463 06:35:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.463 06:35:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.463 06:35:16 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:22:04.463 06:35:16 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:22:04.463 06:35:16 -- paths/export.sh@4 -- $ export PATH 00:22:04.463 06:35:16 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:22:04.463 06:35:16 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:04.463 06:35:16 -- common/autobuild_common.sh@447 -- $ date +%s 00:22:04.463 06:35:16 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721716516.XXXXXX 00:22:04.463 06:35:16 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721716516.XXXXXX.hFNaO7oTUW 00:22:04.463 06:35:16 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:22:04.463 06:35:16 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:22:04.463 06:35:16 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:04.463 06:35:16 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' 
--exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:04.463 06:35:16 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:04.463 06:35:16 -- common/autobuild_common.sh@463 -- $ get_config_params 00:22:04.463 06:35:16 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:22:04.463 06:35:16 -- common/autotest_common.sh@10 -- $ set +x 00:22:04.721 06:35:17 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:22:04.721 06:35:17 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:22:04.721 06:35:17 -- pm/common@17 -- $ local monitor 00:22:04.721 06:35:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:04.721 06:35:17 -- pm/common@25 -- $ sleep 1 00:22:04.721 06:35:17 -- pm/common@21 -- $ date +%s 00:22:04.722 06:35:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721716517 00:22:04.722 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721716517_collect-vmstat.pm.log 00:22:05.695 06:35:18 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:22:05.695 06:35:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:05.695 06:35:18 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:05.695 06:35:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:05.695 06:35:18 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:22:05.695 06:35:18 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:22:05.695 06:35:18 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:22:05.695 06:35:18 -- common/autotest_common.sh@722 -- $ xtrace_disable 00:22:05.695 06:35:18 -- common/autotest_common.sh@10 -- $ set +x 00:22:05.695 06:35:18 -- spdk/autopackage.sh@26 -- $ [[ /usr/bin/clang == *clang* ]] 00:22:05.695 06:35:18 -- spdk/autopackage.sh@27 -- $ nproc 00:22:05.695 06:35:18 -- spdk/autopackage.sh@27 -- $ jobs=5 00:22:05.695 06:35:18 -- spdk/autopackage.sh@28 -- $ case "$(uname -s)" in 00:22:05.695 06:35:18 -- spdk/autopackage.sh@28 -- $ uname -s 00:22:05.695 06:35:18 -- spdk/autopackage.sh@28 -- $ case "$(uname -s)" in 00:22:05.695 06:35:18 -- spdk/autopackage.sh@32 -- $ export LD=ld.lld 00:22:05.695 06:35:18 -- spdk/autopackage.sh@32 -- $ LD=ld.lld 00:22:05.695 06:35:18 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:22:05.695 06:35:18 -- spdk/autopackage.sh@40 -- $ get_config_params 00:22:05.695 06:35:18 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:22:05.695 06:35:18 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:22:05.695 06:35:18 -- common/autotest_common.sh@10 -- $ set +x 00:22:05.695 06:35:18 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:22:05.695 06:35:18 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-lto --disable-unit-tests 00:22:05.953 Notice: Vhost, rte_vhost library, virtio, and fuse 00:22:05.953 are only supported on Linux. Turning off default feature. 
00:22:05.953 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:22:05.953 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:22:06.209 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:22:06.209 Using 'verbs' RDMA provider 00:22:14.632 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:22:24.598 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:22:24.598 Creating mk/config.mk...done. 00:22:24.598 Creating mk/cc.flags.mk...done. 00:22:24.598 Type 'gmake' to build. 00:22:24.598 06:35:35 -- spdk/autopackage.sh@43 -- $ gmake -j10 00:22:24.598 gmake[1]: Nothing to be done for 'all'. 00:22:24.598 ps: stdin: not a terminal 00:22:28.802 The Meson build system 00:22:28.802 Version: 1.4.0 00:22:28.802 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:22:28.802 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:22:28.802 Build type: native build 00:22:28.802 Program cat found: YES (/bin/cat) 00:22:28.802 Project name: DPDK 00:22:28.802 Project version: 24.03.0 00:22:28.802 C compiler for the host machine: /usr/bin/clang (clang 16.0.6 "FreeBSD clang version 16.0.6 (https://github.com/llvm/llvm-project.git llvmorg-16.0.6-0-g7cbf1a259152)") 00:22:28.802 C linker for the host machine: /usr/bin/clang ld.lld 16.0.6 00:22:28.802 Host machine cpu family: x86_64 00:22:28.802 Host machine cpu: x86_64 00:22:28.802 Message: ## Building in Developer Mode ## 00:22:28.802 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:22:28.802 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:22:28.802 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:22:28.802 Program python3 found: YES (/usr/local/bin/python3.9) 00:22:28.802 Program cat found: YES (/bin/cat) 00:22:28.802 Compiler for C supports arguments -march=native: YES 00:22:28.802 Checking for size of "void *" : 8 00:22:28.802 Checking for size of "void *" : 8 (cached) 00:22:28.802 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:22:28.802 Library m found: YES 00:22:28.802 Library numa found: NO 00:22:28.802 Library fdt found: NO 00:22:28.802 Library execinfo found: YES 00:22:28.802 Has header "execinfo.h" : YES 00:22:28.802 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.2.0 00:22:28.802 Run-time dependency libarchive found: NO (tried pkgconfig) 00:22:28.802 Run-time dependency libbsd found: NO (tried pkgconfig) 00:22:28.802 Run-time dependency jansson found: NO (tried pkgconfig) 00:22:28.802 Run-time dependency openssl found: YES 3.0.13 00:22:28.802 Run-time dependency libpcap found: NO (tried pkgconfig) 00:22:28.802 Library pcap found: YES 00:22:28.802 Has header "pcap.h" with dependency -lpcap: YES 00:22:28.802 Compiler for C supports arguments -Wcast-qual: YES 00:22:28.802 Compiler for C supports arguments -Wdeprecated: YES 00:22:28.802 Compiler for C supports arguments -Wformat: YES 00:22:28.802 Compiler for C supports arguments -Wformat-nonliteral: YES 00:22:28.802 Compiler for C supports arguments -Wformat-security: YES 00:22:28.802 Compiler for C supports arguments -Wmissing-declarations: YES 00:22:28.802 Compiler for C supports arguments -Wmissing-prototypes: YES 00:22:28.802 Compiler for C supports arguments -Wnested-externs: YES 00:22:28.802 Compiler for C supports arguments -Wold-style-definition: YES 00:22:28.802 Compiler for C supports arguments -Wpointer-arith: YES 00:22:28.802 
Compiler for C supports arguments -Wsign-compare: YES 00:22:28.802 Compiler for C supports arguments -Wstrict-prototypes: YES 00:22:28.802 Compiler for C supports arguments -Wundef: YES 00:22:28.802 Compiler for C supports arguments -Wwrite-strings: YES 00:22:28.802 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:22:28.802 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:22:28.802 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:22:28.802 Compiler for C supports arguments -mavx512f: YES 00:22:28.802 Checking if "AVX512 checking" compiles: YES 00:22:28.802 Fetching value of define "__SSE4_2__" : 1 00:22:28.802 Fetching value of define "__AES__" : 1 00:22:28.802 Fetching value of define "__AVX__" : 1 00:22:28.802 Fetching value of define "__AVX2__" : 1 00:22:28.802 Fetching value of define "__AVX512BW__" : (undefined) 00:22:28.802 Fetching value of define "__AVX512CD__" : (undefined) 00:22:28.802 Fetching value of define "__AVX512DQ__" : (undefined) 00:22:28.802 Fetching value of define "__AVX512F__" : (undefined) 00:22:28.802 Fetching value of define "__AVX512VL__" : (undefined) 00:22:28.802 Fetching value of define "__PCLMUL__" : 1 00:22:28.802 Fetching value of define "__RDRND__" : 1 00:22:28.802 Fetching value of define "__RDSEED__" : 1 00:22:28.803 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:22:28.803 Fetching value of define "__znver1__" : (undefined) 00:22:28.803 Fetching value of define "__znver2__" : (undefined) 00:22:28.803 Fetching value of define "__znver3__" : (undefined) 00:22:28.803 Fetching value of define "__znver4__" : (undefined) 00:22:28.803 Compiler for C supports arguments -Wno-format-truncation: NO 00:22:28.803 Message: lib/log: Defining dependency "log" 00:22:28.803 Message: lib/kvargs: Defining dependency "kvargs" 00:22:28.803 Message: lib/telemetry: Defining dependency "telemetry" 00:22:28.803 Checking if "Detect argument count for CPU_OR" compiles: YES 00:22:28.803 Checking for function "getentropy" : YES 00:22:28.803 Message: lib/eal: Defining dependency "eal" 00:22:28.803 Message: lib/ring: Defining dependency "ring" 00:22:28.803 Message: lib/rcu: Defining dependency "rcu" 00:22:28.803 Message: lib/mempool: Defining dependency "mempool" 00:22:28.803 Message: lib/mbuf: Defining dependency "mbuf" 00:22:28.803 Fetching value of define "__PCLMUL__" : 1 (cached) 00:22:28.803 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:22:28.803 Compiler for C supports arguments -mpclmul: YES 00:22:28.803 Compiler for C supports arguments -maes: YES 00:22:28.803 Compiler for C supports arguments -mavx512f: YES (cached) 00:22:28.803 Compiler for C supports arguments -mavx512bw: YES 00:22:28.803 Compiler for C supports arguments -mavx512dq: YES 00:22:28.803 Compiler for C supports arguments -mavx512vl: YES 00:22:28.803 Compiler for C supports arguments -mvpclmulqdq: YES 00:22:28.803 Compiler for C supports arguments -mavx2: YES 00:22:28.803 Compiler for C supports arguments -mavx: YES 00:22:28.803 Message: lib/net: Defining dependency "net" 00:22:28.803 Message: lib/meter: Defining dependency "meter" 00:22:28.803 Message: lib/ethdev: Defining dependency "ethdev" 00:22:28.803 Message: lib/pci: Defining dependency "pci" 00:22:28.803 Message: lib/cmdline: Defining dependency "cmdline" 00:22:28.803 Message: lib/hash: Defining dependency "hash" 00:22:28.803 Message: lib/timer: Defining dependency "timer" 00:22:28.803 Message: lib/compressdev: Defining dependency "compressdev" 00:22:28.803 
Message: lib/cryptodev: Defining dependency "cryptodev" 00:22:28.803 Message: lib/dmadev: Defining dependency "dmadev" 00:22:28.803 Compiler for C supports arguments -Wno-cast-qual: YES 00:22:28.803 Message: lib/reorder: Defining dependency "reorder" 00:22:28.803 Message: lib/security: Defining dependency "security" 00:22:28.803 Has header "linux/userfaultfd.h" : NO 00:22:28.803 Has header "linux/vduse.h" : NO 00:22:28.803 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:22:28.803 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:22:28.803 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:22:28.803 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:22:28.803 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:22:28.803 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:22:28.803 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:22:28.803 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:22:28.803 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:22:28.803 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:22:28.803 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:22:28.803 Program doxygen found: YES (/usr/local/bin/doxygen) 00:22:28.803 Configuring doxy-api-html.conf using configuration 00:22:28.803 Configuring doxy-api-man.conf using configuration 00:22:28.803 Program mandb found: NO 00:22:28.803 Program sphinx-build found: NO 00:22:28.803 Configuring rte_build_config.h using configuration 00:22:28.803 Message: 00:22:28.803 ================= 00:22:28.803 Applications Enabled 00:22:28.803 ================= 00:22:28.803 00:22:28.803 apps: 00:22:28.803 00:22:28.803 00:22:28.803 Message: 00:22:28.803 ================= 00:22:28.803 Libraries Enabled 00:22:28.803 ================= 00:22:28.803 00:22:28.803 libs: 00:22:28.803 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:22:28.803 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:22:28.803 cryptodev, dmadev, reorder, security, 00:22:28.803 00:22:28.803 Message: 00:22:28.803 =============== 00:22:28.803 Drivers Enabled 00:22:28.803 =============== 00:22:28.803 00:22:28.803 common: 00:22:28.803 00:22:28.803 bus: 00:22:28.803 pci, vdev, 00:22:28.803 mempool: 00:22:28.803 ring, 00:22:28.803 dma: 00:22:28.803 00:22:28.803 net: 00:22:28.803 00:22:28.803 crypto: 00:22:28.803 00:22:28.803 compress: 00:22:28.803 00:22:28.803 00:22:28.803 Message: 00:22:28.803 ================= 00:22:28.803 Content Skipped 00:22:28.803 ================= 00:22:28.803 00:22:28.803 apps: 00:22:28.803 dumpcap: explicitly disabled via build config 00:22:28.803 graph: explicitly disabled via build config 00:22:28.803 pdump: explicitly disabled via build config 00:22:28.803 proc-info: explicitly disabled via build config 00:22:28.803 test-acl: explicitly disabled via build config 00:22:28.803 test-bbdev: explicitly disabled via build config 00:22:28.803 test-cmdline: explicitly disabled via build config 00:22:28.803 test-compress-perf: explicitly disabled via build config 00:22:28.803 test-crypto-perf: explicitly disabled via build config 00:22:28.803 test-dma-perf: explicitly disabled via build config 00:22:28.803 test-eventdev: explicitly disabled via build config 00:22:28.803 test-fib: explicitly disabled via build config 00:22:28.803 test-flow-perf: explicitly disabled via build config 00:22:28.803 test-gpudev: 
explicitly disabled via build config 00:22:28.803 test-mldev: explicitly disabled via build config 00:22:28.803 test-pipeline: explicitly disabled via build config 00:22:28.803 test-pmd: explicitly disabled via build config 00:22:28.803 test-regex: explicitly disabled via build config 00:22:28.803 test-sad: explicitly disabled via build config 00:22:28.803 test-security-perf: explicitly disabled via build config 00:22:28.803 00:22:28.803 libs: 00:22:28.803 argparse: explicitly disabled via build config 00:22:28.803 metrics: explicitly disabled via build config 00:22:28.803 acl: explicitly disabled via build config 00:22:28.803 bbdev: explicitly disabled via build config 00:22:28.803 bitratestats: explicitly disabled via build config 00:22:28.803 bpf: explicitly disabled via build config 00:22:28.803 cfgfile: explicitly disabled via build config 00:22:28.803 distributor: explicitly disabled via build config 00:22:28.803 efd: explicitly disabled via build config 00:22:28.803 eventdev: explicitly disabled via build config 00:22:28.803 dispatcher: explicitly disabled via build config 00:22:28.803 gpudev: explicitly disabled via build config 00:22:28.803 gro: explicitly disabled via build config 00:22:28.803 gso: explicitly disabled via build config 00:22:28.803 ip_frag: explicitly disabled via build config 00:22:28.803 jobstats: explicitly disabled via build config 00:22:28.803 latencystats: explicitly disabled via build config 00:22:28.803 lpm: explicitly disabled via build config 00:22:28.803 member: explicitly disabled via build config 00:22:28.803 pcapng: explicitly disabled via build config 00:22:28.803 power: only supported on Linux 00:22:28.803 rawdev: explicitly disabled via build config 00:22:28.803 regexdev: explicitly disabled via build config 00:22:28.803 mldev: explicitly disabled via build config 00:22:28.803 rib: explicitly disabled via build config 00:22:28.803 sched: explicitly disabled via build config 00:22:28.803 stack: explicitly disabled via build config 00:22:28.803 vhost: only supported on Linux 00:22:28.803 ipsec: explicitly disabled via build config 00:22:28.803 pdcp: explicitly disabled via build config 00:22:28.803 fib: explicitly disabled via build config 00:22:28.803 port: explicitly disabled via build config 00:22:28.803 pdump: explicitly disabled via build config 00:22:28.803 table: explicitly disabled via build config 00:22:28.803 pipeline: explicitly disabled via build config 00:22:28.803 graph: explicitly disabled via build config 00:22:28.803 node: explicitly disabled via build config 00:22:28.803 00:22:28.803 drivers: 00:22:28.803 common/cpt: not in enabled drivers build config 00:22:28.803 common/dpaax: not in enabled drivers build config 00:22:28.803 common/iavf: not in enabled drivers build config 00:22:28.803 common/idpf: not in enabled drivers build config 00:22:28.803 common/ionic: not in enabled drivers build config 00:22:28.803 common/mvep: not in enabled drivers build config 00:22:28.803 common/octeontx: not in enabled drivers build config 00:22:28.803 bus/auxiliary: not in enabled drivers build config 00:22:28.803 bus/cdx: not in enabled drivers build config 00:22:28.803 bus/dpaa: not in enabled drivers build config 00:22:28.803 bus/fslmc: not in enabled drivers build config 00:22:28.803 bus/ifpga: not in enabled drivers build config 00:22:28.803 bus/platform: not in enabled drivers build config 00:22:28.803 bus/uacce: not in enabled drivers build config 00:22:28.803 bus/vmbus: not in enabled drivers build config 00:22:28.803 common/cnxk: not in 
enabled drivers build config 00:22:28.803 common/mlx5: not in enabled drivers build config 00:22:28.803 common/nfp: not in enabled drivers build config 00:22:28.803 common/nitrox: not in enabled drivers build config 00:22:28.803 common/qat: not in enabled drivers build config 00:22:28.803 common/sfc_efx: not in enabled drivers build config 00:22:28.803 mempool/bucket: not in enabled drivers build config 00:22:28.803 mempool/cnxk: not in enabled drivers build config 00:22:28.803 mempool/dpaa: not in enabled drivers build config 00:22:28.803 mempool/dpaa2: not in enabled drivers build config 00:22:28.803 mempool/octeontx: not in enabled drivers build config 00:22:28.804 mempool/stack: not in enabled drivers build config 00:22:28.804 dma/cnxk: not in enabled drivers build config 00:22:28.804 dma/dpaa: not in enabled drivers build config 00:22:28.804 dma/dpaa2: not in enabled drivers build config 00:22:28.804 dma/hisilicon: not in enabled drivers build config 00:22:28.804 dma/idxd: not in enabled drivers build config 00:22:28.804 dma/ioat: not in enabled drivers build config 00:22:28.804 dma/skeleton: not in enabled drivers build config 00:22:28.804 net/af_packet: not in enabled drivers build config 00:22:28.804 net/af_xdp: not in enabled drivers build config 00:22:28.804 net/ark: not in enabled drivers build config 00:22:28.804 net/atlantic: not in enabled drivers build config 00:22:28.804 net/avp: not in enabled drivers build config 00:22:28.804 net/axgbe: not in enabled drivers build config 00:22:28.804 net/bnx2x: not in enabled drivers build config 00:22:28.804 net/bnxt: not in enabled drivers build config 00:22:28.804 net/bonding: not in enabled drivers build config 00:22:28.804 net/cnxk: not in enabled drivers build config 00:22:28.804 net/cpfl: not in enabled drivers build config 00:22:28.804 net/cxgbe: not in enabled drivers build config 00:22:28.804 net/dpaa: not in enabled drivers build config 00:22:28.804 net/dpaa2: not in enabled drivers build config 00:22:28.804 net/e1000: not in enabled drivers build config 00:22:28.804 net/ena: not in enabled drivers build config 00:22:28.804 net/enetc: not in enabled drivers build config 00:22:28.804 net/enetfec: not in enabled drivers build config 00:22:28.804 net/enic: not in enabled drivers build config 00:22:28.804 net/failsafe: not in enabled drivers build config 00:22:28.804 net/fm10k: not in enabled drivers build config 00:22:28.804 net/gve: not in enabled drivers build config 00:22:28.804 net/hinic: not in enabled drivers build config 00:22:28.804 net/hns3: not in enabled drivers build config 00:22:28.804 net/i40e: not in enabled drivers build config 00:22:28.804 net/iavf: not in enabled drivers build config 00:22:28.804 net/ice: not in enabled drivers build config 00:22:28.804 net/idpf: not in enabled drivers build config 00:22:28.804 net/igc: not in enabled drivers build config 00:22:28.804 net/ionic: not in enabled drivers build config 00:22:28.804 net/ipn3ke: not in enabled drivers build config 00:22:28.804 net/ixgbe: not in enabled drivers build config 00:22:28.804 net/mana: not in enabled drivers build config 00:22:28.804 net/memif: not in enabled drivers build config 00:22:28.804 net/mlx4: not in enabled drivers build config 00:22:28.804 net/mlx5: not in enabled drivers build config 00:22:28.804 net/mvneta: not in enabled drivers build config 00:22:28.804 net/mvpp2: not in enabled drivers build config 00:22:28.804 net/netvsc: not in enabled drivers build config 00:22:28.804 net/nfb: not in enabled drivers build config 
00:22:28.804 net/nfp: not in enabled drivers build config 00:22:28.804 net/ngbe: not in enabled drivers build config 00:22:28.804 net/null: not in enabled drivers build config 00:22:28.804 net/octeontx: not in enabled drivers build config 00:22:28.804 net/octeon_ep: not in enabled drivers build config 00:22:28.804 net/pcap: not in enabled drivers build config 00:22:28.804 net/pfe: not in enabled drivers build config 00:22:28.804 net/qede: not in enabled drivers build config 00:22:28.804 net/ring: not in enabled drivers build config 00:22:28.804 net/sfc: not in enabled drivers build config 00:22:28.804 net/softnic: not in enabled drivers build config 00:22:28.804 net/tap: not in enabled drivers build config 00:22:28.804 net/thunderx: not in enabled drivers build config 00:22:28.804 net/txgbe: not in enabled drivers build config 00:22:28.804 net/vdev_netvsc: not in enabled drivers build config 00:22:28.804 net/vhost: not in enabled drivers build config 00:22:28.804 net/virtio: not in enabled drivers build config 00:22:28.804 net/vmxnet3: not in enabled drivers build config 00:22:28.804 raw/*: missing internal dependency, "rawdev" 00:22:28.804 crypto/armv8: not in enabled drivers build config 00:22:28.804 crypto/bcmfs: not in enabled drivers build config 00:22:28.804 crypto/caam_jr: not in enabled drivers build config 00:22:28.804 crypto/ccp: not in enabled drivers build config 00:22:28.804 crypto/cnxk: not in enabled drivers build config 00:22:28.804 crypto/dpaa_sec: not in enabled drivers build config 00:22:28.804 crypto/dpaa2_sec: not in enabled drivers build config 00:22:28.804 crypto/ipsec_mb: not in enabled drivers build config 00:22:28.804 crypto/mlx5: not in enabled drivers build config 00:22:28.804 crypto/mvsam: not in enabled drivers build config 00:22:28.804 crypto/nitrox: not in enabled drivers build config 00:22:28.804 crypto/null: not in enabled drivers build config 00:22:28.804 crypto/octeontx: not in enabled drivers build config 00:22:28.804 crypto/openssl: not in enabled drivers build config 00:22:28.804 crypto/scheduler: not in enabled drivers build config 00:22:28.804 crypto/uadk: not in enabled drivers build config 00:22:28.804 crypto/virtio: not in enabled drivers build config 00:22:28.804 compress/isal: not in enabled drivers build config 00:22:28.804 compress/mlx5: not in enabled drivers build config 00:22:28.804 compress/nitrox: not in enabled drivers build config 00:22:28.804 compress/octeontx: not in enabled drivers build config 00:22:28.804 compress/zlib: not in enabled drivers build config 00:22:28.804 regex/*: missing internal dependency, "regexdev" 00:22:28.804 ml/*: missing internal dependency, "mldev" 00:22:28.804 vdpa/*: missing internal dependency, "vhost" 00:22:28.804 event/*: missing internal dependency, "eventdev" 00:22:28.804 baseband/*: missing internal dependency, "bbdev" 00:22:28.804 gpu/*: missing internal dependency, "gpudev" 00:22:28.804 00:22:28.804 00:22:29.063 Build targets in project: 81 00:22:29.063 00:22:29.063 DPDK 24.03.0 00:22:29.063 00:22:29.063 User defined options 00:22:29.063 default_library : static 00:22:29.063 libdir : lib 00:22:29.063 prefix : / 00:22:29.063 c_args : -fPIC -Werror 00:22:29.063 c_link_args : 00:22:29.063 cpu_instruction_set: native 00:22:29.063 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:22:29.063 
disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:22:29.063 enable_docs : false 00:22:29.063 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:22:29.063 enable_kmods : true 00:22:29.063 max_lcores : 128 00:22:29.063 tests : false 00:22:29.063 00:22:29.063 Found ninja-1.11.1 at /usr/local/bin/ninja 00:22:29.629 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:22:29.629 [1/233] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:22:29.629 [2/233] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:22:29.629 [3/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:22:29.629 [4/233] Linking static target lib/librte_kvargs.a 00:22:29.629 [5/233] Compiling C object lib/librte_log.a.p/log_log.c.o 00:22:29.629 [6/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:22:29.629 [7/233] Linking static target lib/librte_log.a 00:22:29.629 [8/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:22:29.887 [9/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:22:29.887 [10/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:22:30.145 [11/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:22:30.145 [12/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:22:30.145 [13/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:22:30.145 [14/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:22:30.145 [15/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:22:30.145 [16/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:22:30.145 [17/233] Linking static target lib/librte_telemetry.a 00:22:30.403 [18/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:22:30.403 [19/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:22:30.403 [20/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:22:30.661 [21/233] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:22:30.661 [22/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:22:30.661 [23/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:22:30.661 [24/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:22:30.661 [25/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:22:30.661 [26/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:22:30.661 [27/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:22:30.918 [28/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:22:30.918 [29/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:22:30.918 [30/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:22:30.918 [31/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:22:30.918 [32/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:22:30.918 [33/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 
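The meson summary above records the user-defined options used when configuring the bundled DPDK for this static FreeBSD build (default_library=static, c_args='-fPIC -Werror', enable_kmods=true, a pruned driver set, and long disable_apps/disable_libs lists). A minimal sketch of a roughly equivalent standalone invocation, assuming one were configuring the bundled DPDK by hand with the same options; normally SPDK's configure drives this step, and the full disable_apps/disable_libs lists from the summary are omitted here for brevity:

    # Sketch: standalone DPDK configure approximating the options reported above.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        -Ddefault_library=static -Dprefix=/ -Dlibdir=lib \
        -Dc_args='-fPIC -Werror' -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_kmods=true -Denable_docs=false -Dmax_lcores=128 -Dtests=false
    ninja -C build-tmp
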
00:22:30.918 [34/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:22:31.176 [35/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:22:31.176 [36/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:22:31.176 [37/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:22:31.176 [38/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:22:31.176 [39/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:22:31.434 [40/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:22:31.434 [41/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:22:31.434 [42/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:22:31.693 [43/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:22:31.693 [44/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:22:31.693 [45/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:22:31.693 [46/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:22:31.693 [47/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:22:31.693 [48/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:22:31.693 [49/233] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:22:31.693 [50/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:22:31.693 [51/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:22:31.950 [52/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:22:31.950 [53/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:22:31.950 [54/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:22:31.950 [55/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:22:31.950 [56/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:22:32.208 [57/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:22:32.208 [58/233] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:22:32.208 [59/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:22:32.208 [60/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:22:32.208 [61/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:22:32.467 [62/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:22:32.467 [63/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:22:32.467 [64/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:22:32.467 [65/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:22:32.467 [66/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:22:32.467 [67/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:22:32.467 [68/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:22:32.467 [69/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:22:32.725 [70/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:22:32.725 [71/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:22:32.725 [72/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:22:32.725 [73/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:22:32.725 [74/233] 
Linking static target lib/librte_eal.a 00:22:32.984 [75/233] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:22:32.984 [76/233] Linking static target lib/librte_ring.a 00:22:32.984 [77/233] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:22:32.984 [78/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:22:32.984 [79/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:22:33.242 [80/233] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:22:33.242 [81/233] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:22:33.242 [82/233] Linking target lib/librte_log.so.24.1 00:22:33.242 [83/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:22:33.242 [84/233] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:22:33.242 [85/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:22:33.242 [86/233] Linking target lib/librte_kvargs.so.24.1 00:22:33.242 [87/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:22:33.242 [88/233] Linking static target lib/librte_mempool.a 00:22:33.500 [89/233] Linking target lib/librte_telemetry.so.24.1 00:22:33.500 [90/233] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:22:33.500 [91/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:22:33.500 [92/233] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:22:33.500 [93/233] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:22:33.500 [94/233] Linking static target lib/librte_rcu.a 00:22:33.500 [95/233] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:22:33.500 [96/233] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:22:33.500 [97/233] Linking static target lib/net/libnet_crc_avx512_lib.a 00:22:33.759 [98/233] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:22:33.759 [99/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:22:33.759 [100/233] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:22:33.759 [101/233] Linking static target lib/librte_mbuf.a 00:22:33.759 [102/233] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:22:34.017 [103/233] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:22:34.017 [104/233] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:22:34.017 [105/233] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:22:34.017 [106/233] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:22:34.017 [107/233] Linking static target lib/librte_meter.a 00:22:34.018 [108/233] Linking static target lib/librte_net.a 00:22:34.276 [109/233] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:22:34.276 [110/233] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:22:34.276 [111/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:22:34.276 [112/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:22:34.276 [113/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:22:34.276 [114/233] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:22:34.276 [115/233] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:22:34.842 [116/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:22:34.842 [117/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:22:35.100 [118/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:22:35.100 [119/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:22:35.100 [120/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:22:35.100 [121/233] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:22:35.100 [122/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:22:35.100 [123/233] Linking static target lib/librte_pci.a 00:22:35.100 [124/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:22:35.100 [125/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:22:35.100 [126/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:22:35.100 [127/233] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:22:35.358 [128/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:22:35.358 [129/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:22:35.358 [130/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:22:35.358 [131/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:22:35.358 [132/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:22:35.358 [133/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:22:35.358 [134/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:22:35.358 [135/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:22:35.358 [136/233] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:22:35.358 [137/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:22:35.358 [138/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:22:35.615 [139/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:22:35.615 [140/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:22:35.615 [141/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:22:35.615 [142/233] Linking static target lib/librte_cmdline.a 00:22:35.873 [143/233] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:22:35.873 [144/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:22:35.873 [145/233] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:22:35.873 [146/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:22:36.132 [147/233] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:22:36.132 [148/233] Linking static target lib/librte_timer.a 00:22:36.132 [149/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:22:36.391 [150/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:22:36.391 [151/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:22:36.391 [152/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:22:36.391 [153/233] Linking static target lib/librte_compressdev.a 00:22:36.391 [154/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:22:36.650 [155/233] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:22:36.650 [156/233] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:22:36.650 [157/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:22:36.650 [158/233] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:22:36.650 [159/233] Linking static target lib/librte_hash.a 00:22:36.910 [160/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:22:36.910 [161/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:22:36.910 [162/233] Linking static target lib/librte_ethdev.a 00:22:36.910 [163/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:22:36.910 [164/233] Linking static target lib/librte_dmadev.a 00:22:36.910 [165/233] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:22:37.186 [166/233] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:22:37.186 [167/233] Linking static target lib/librte_reorder.a 00:22:37.186 [168/233] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:22:37.186 [169/233] Linking static target lib/librte_security.a 00:22:37.186 [170/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:22:37.186 [171/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:22:37.186 [172/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:22:37.186 [173/233] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:22:37.186 [174/233] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:22:37.444 [175/233] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:22:37.444 [176/233] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:22:37.444 [177/233] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:22:37.444 [178/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:22:37.444 [179/233] Linking static target drivers/libtmp_rte_bus_pci.a 00:22:37.444 [180/233] Generating kernel/freebsd/contigmem with a custom command 00:22:37.444 machine -> /usr/src/sys/amd64/include 00:22:37.444 x86 -> /usr/src/sys/x86/include 00:22:37.444 i386 -> /usr/src/sys/i386/include 00:22:37.444 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:22:37.444 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:22:37.444 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:22:37.444 touch opt_global.h 00:22:37.444 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:22:37.444 ld.lld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:22:37.444 :> export_syms 00:22:37.444 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:22:37.444 objcopy --strip-debug contigmem.ko 00:22:37.444 [181/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:22:37.444 [182/233] Linking static target lib/librte_cryptodev.a 00:22:37.702 [183/233] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:22:37.703 [184/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:22:37.703 [185/233] Linking static target drivers/libtmp_rte_bus_vdev.a 00:22:37.703 [186/233] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:22:37.703 [187/233] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:22:37.703 [188/233] Linking static target drivers/librte_bus_pci.a 00:22:37.703 [189/233] Generating kernel/freebsd/nic_uio with a custom command 00:22:37.703 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:22:37.703 ld.lld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:22:37.703 :> export_syms 00:22:37.703 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:22:37.703 objcopy --strip-debug nic_uio.ko 00:22:37.703 [190/233] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:22:37.961 [191/233] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:22:37.961 [192/233] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:22:37.961 [193/233] Linking static target drivers/librte_bus_vdev.a 00:22:37.961 [194/233] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:22:37.961 [195/233] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:22:38.220 [196/233] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:22:38.220 [197/233] Linking static target drivers/libtmp_rte_mempool_ring.a 00:22:38.220 [198/233] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:22:38.479 [199/233] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:22:38.479 [200/233] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:22:38.479 [201/233] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:22:38.479 [202/233] Linking static target drivers/librte_mempool_ring.a 00:22:42.668 [203/233] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:22:43.603 [204/233] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:22:43.603 [205/233] Linking target lib/librte_eal.so.24.1 00:22:43.861 [206/233] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:22:43.861 [207/233] Linking target lib/librte_pci.so.24.1 00:22:43.861 [208/233] Linking target lib/librte_ring.so.24.1 00:22:43.861 [209/233] Linking target lib/librte_meter.so.24.1 00:22:43.861 [210/233] Linking target drivers/librte_bus_vdev.so.24.1 00:22:43.861 [211/233] Linking target lib/librte_timer.so.24.1 00:22:43.861 [212/233] Linking target lib/librte_dmadev.so.24.1 00:22:43.861 [213/233] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:22:43.861 [214/233] Generating symbol file 
lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:22:43.861 [215/233] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:22:44.119 [216/233] Linking target lib/librte_mempool.so.24.1 00:22:44.119 [217/233] Linking target drivers/librte_bus_pci.so.24.1 00:22:44.119 [218/233] Linking target lib/librte_rcu.so.24.1 00:22:44.119 [219/233] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:22:44.119 [220/233] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:22:44.119 [221/233] Linking target lib/librte_mbuf.so.24.1 00:22:44.119 [222/233] Linking target drivers/librte_mempool_ring.so.24.1 00:22:44.377 [223/233] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:22:44.377 [224/233] Linking target lib/librte_reorder.so.24.1 00:22:44.377 [225/233] Linking target lib/librte_net.so.24.1 00:22:44.377 [226/233] Linking target lib/librte_compressdev.so.24.1 00:22:44.377 [227/233] Linking target lib/librte_cryptodev.so.24.1 00:22:44.377 [228/233] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:22:44.377 [229/233] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:22:44.377 [230/233] Linking target lib/librte_hash.so.24.1 00:22:44.377 [231/233] Linking target lib/librte_security.so.24.1 00:22:44.377 [232/233] Linking target lib/librte_cmdline.so.24.1 00:22:44.377 [233/233] Linking target lib/librte_ethdev.so.24.1 00:22:44.377 INFO: autodetecting backend as ninja 00:22:44.377 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:22:45.311 CC lib/log/log_flags.o 00:22:45.312 CC lib/ut_mock/mock.o 00:22:45.312 CC lib/log/log.o 00:22:45.312 CC lib/log/log_deprecated.o 00:22:45.312 CC lib/ut/ut.o 00:22:45.312 LIB libspdk_ut_mock.a 00:22:45.312 LIB libspdk_ut.a 00:22:45.312 LIB libspdk_log.a 00:22:45.570 CC lib/util/base64.o 00:22:45.570 CC lib/util/bit_array.o 00:22:45.570 CC lib/util/cpuset.o 00:22:45.570 CC lib/util/crc16.o 00:22:45.570 CC lib/ioat/ioat.o 00:22:45.570 CC lib/util/crc32.o 00:22:45.570 CC lib/util/crc32c.o 00:22:45.570 CC lib/util/crc32_ieee.o 00:22:45.570 CC lib/dma/dma.o 00:22:45.570 CXX lib/trace_parser/trace.o 00:22:45.570 CC lib/util/crc64.o 00:22:45.570 CC lib/util/dif.o 00:22:45.570 CC lib/util/fd.o 00:22:45.570 CC lib/util/fd_group.o 00:22:45.570 CC lib/util/file.o 00:22:45.570 CC lib/util/hexlify.o 00:22:45.570 LIB libspdk_dma.a 00:22:45.570 CC lib/util/iov.o 00:22:45.570 CC lib/util/math.o 00:22:45.570 CC lib/util/net.o 00:22:45.570 CC lib/util/pipe.o 00:22:45.570 CC lib/util/strerror_tls.o 00:22:45.829 CC lib/util/string.o 00:22:45.829 CC lib/util/uuid.o 00:22:45.829 LIB libspdk_ioat.a 00:22:45.829 CC lib/util/xor.o 00:22:45.829 CC lib/util/zipf.o 00:22:46.087 LIB libspdk_util.a 00:22:46.346 CC lib/conf/conf.o 00:22:46.346 CC lib/idxd/idxd.o 00:22:46.346 CC lib/idxd/idxd_user.o 00:22:46.346 CC lib/json/json_parse.o 00:22:46.346 CC lib/json/json_util.o 00:22:46.346 CC lib/env_dpdk/env.o 00:22:46.346 CC lib/vmd/vmd.o 00:22:46.346 CC lib/rdma_provider/common.o 00:22:46.346 CC lib/rdma_utils/rdma_utils.o 00:22:46.346 CC lib/rdma_provider/rdma_provider_verbs.o 00:22:46.346 CC lib/env_dpdk/memory.o 00:22:46.346 LIB libspdk_conf.a 00:22:46.346 CC lib/vmd/led.o 00:22:46.346 CC lib/json/json_write.o 00:22:46.346 LIB libspdk_rdma_utils.a 00:22:46.346 CC lib/env_dpdk/pci.o 00:22:46.346 CC lib/env_dpdk/init.o 00:22:46.603 CC 
lib/env_dpdk/threads.o 00:22:46.604 LIB libspdk_rdma_provider.a 00:22:46.604 CC lib/env_dpdk/pci_ioat.o 00:22:46.604 CC lib/env_dpdk/pci_virtio.o 00:22:46.604 CC lib/env_dpdk/pci_vmd.o 00:22:46.604 LIB libspdk_idxd.a 00:22:46.604 CC lib/env_dpdk/pci_idxd.o 00:22:46.604 CC lib/env_dpdk/pci_event.o 00:22:46.604 CC lib/env_dpdk/sigbus_handler.o 00:22:46.604 LIB libspdk_vmd.a 00:22:46.604 CC lib/env_dpdk/pci_dpdk.o 00:22:46.604 CC lib/env_dpdk/pci_dpdk_2207.o 00:22:46.604 CC lib/env_dpdk/pci_dpdk_2211.o 00:22:46.604 LIB libspdk_json.a 00:22:46.882 LIB libspdk_trace_parser.a 00:22:46.882 CC lib/jsonrpc/jsonrpc_server.o 00:22:46.882 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:22:46.882 CC lib/jsonrpc/jsonrpc_client.o 00:22:46.882 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:22:46.882 LIB libspdk_jsonrpc.a 00:22:47.144 CC lib/rpc/rpc.o 00:22:47.144 LIB libspdk_rpc.a 00:22:47.402 CC lib/notify/notify.o 00:22:47.402 CC lib/notify/notify_rpc.o 00:22:47.402 CC lib/keyring/keyring.o 00:22:47.402 CC lib/keyring/keyring_rpc.o 00:22:47.402 CC lib/trace/trace.o 00:22:47.402 CC lib/trace/trace_flags.o 00:22:47.402 CC lib/trace/trace_rpc.o 00:22:47.402 LIB libspdk_env_dpdk.a 00:22:47.402 LIB libspdk_notify.a 00:22:47.402 LIB libspdk_keyring.a 00:22:47.402 LIB libspdk_trace.a 00:22:47.661 CC lib/sock/sock.o 00:22:47.661 CC lib/sock/sock_rpc.o 00:22:47.661 CC lib/thread/thread.o 00:22:47.661 CC lib/thread/iobuf.o 00:22:47.919 LIB libspdk_sock.a 00:22:47.919 CC lib/nvme/nvme_ctrlr.o 00:22:47.919 CC lib/nvme/nvme_ctrlr_cmd.o 00:22:47.919 CC lib/nvme/nvme_fabric.o 00:22:47.919 CC lib/nvme/nvme_ns.o 00:22:47.919 CC lib/nvme/nvme_pcie_common.o 00:22:47.919 CC lib/nvme/nvme_ns_cmd.o 00:22:47.919 CC lib/nvme/nvme.o 00:22:47.919 CC lib/nvme/nvme_pcie.o 00:22:47.919 CC lib/nvme/nvme_qpair.o 00:22:48.177 LIB libspdk_thread.a 00:22:48.177 CC lib/nvme/nvme_quirks.o 00:22:48.743 CC lib/accel/accel.o 00:22:48.743 CC lib/accel/accel_rpc.o 00:22:48.743 CC lib/accel/accel_sw.o 00:22:48.743 CC lib/nvme/nvme_transport.o 00:22:48.743 CC lib/blob/blobstore.o 00:22:48.743 CC lib/blob/request.o 00:22:48.743 CC lib/init/json_config.o 00:22:48.743 CC lib/blob/zeroes.o 00:22:48.743 CC lib/init/subsystem.o 00:22:48.743 CC lib/nvme/nvme_discovery.o 00:22:48.743 CC lib/blob/blob_bs_dev.o 00:22:48.743 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:22:48.743 CC lib/init/subsystem_rpc.o 00:22:48.743 CC lib/init/rpc.o 00:22:48.743 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:22:48.743 CC lib/nvme/nvme_tcp.o 00:22:48.743 CC lib/nvme/nvme_opal.o 00:22:49.001 LIB libspdk_init.a 00:22:49.001 CC lib/nvme/nvme_io_msg.o 00:22:49.001 LIB libspdk_accel.a 00:22:49.001 CC lib/nvme/nvme_poll_group.o 00:22:49.259 CC lib/event/app.o 00:22:49.259 CC lib/event/reactor.o 00:22:49.259 CC lib/bdev/bdev.o 00:22:49.259 CC lib/event/log_rpc.o 00:22:49.259 CC lib/event/app_rpc.o 00:22:49.259 CC lib/bdev/bdev_rpc.o 00:22:49.518 CC lib/event/scheduler_static.o 00:22:49.518 CC lib/nvme/nvme_zns.o 00:22:49.518 CC lib/bdev/bdev_zone.o 00:22:49.518 CC lib/bdev/part.o 00:22:49.518 CC lib/nvme/nvme_stubs.o 00:22:49.518 LIB libspdk_event.a 00:22:49.518 CC lib/bdev/scsi_nvme.o 00:22:49.518 CC lib/nvme/nvme_auth.o 00:22:49.518 CC lib/nvme/nvme_rdma.o 00:22:50.085 LIB libspdk_blob.a 00:22:50.085 CC lib/blobfs/blobfs.o 00:22:50.085 CC lib/lvol/lvol.o 00:22:50.085 CC lib/blobfs/tree.o 00:22:50.345 LIB libspdk_nvme.a 00:22:50.612 LIB libspdk_blobfs.a 00:22:50.612 LIB libspdk_lvol.a 00:22:50.612 LIB libspdk_bdev.a 00:22:50.612 CC lib/scsi/dev.o 00:22:50.612 CC lib/scsi/lun.o 00:22:50.612 CC lib/scsi/port.o 
00:22:50.612 CC lib/scsi/scsi.o 00:22:50.612 CC lib/scsi/scsi_bdev.o 00:22:50.612 CC lib/scsi/scsi_pr.o 00:22:50.612 CC lib/scsi/scsi_rpc.o 00:22:50.612 CC lib/scsi/task.o 00:22:50.612 CC lib/nvmf/ctrlr.o 00:22:50.612 CC lib/nvmf/ctrlr_discovery.o 00:22:50.870 CC lib/nvmf/ctrlr_bdev.o 00:22:50.870 CC lib/nvmf/subsystem.o 00:22:50.870 CC lib/nvmf/nvmf.o 00:22:50.870 CC lib/nvmf/nvmf_rpc.o 00:22:50.870 CC lib/nvmf/transport.o 00:22:50.870 CC lib/nvmf/tcp.o 00:22:50.870 CC lib/nvmf/stubs.o 00:22:50.870 CC lib/nvmf/mdns_server.o 00:22:51.127 LIB libspdk_scsi.a 00:22:51.127 CC lib/nvmf/rdma.o 00:22:51.127 CC lib/nvmf/auth.o 00:22:51.127 CC lib/iscsi/conn.o 00:22:51.127 CC lib/iscsi/init_grp.o 00:22:51.127 CC lib/iscsi/iscsi.o 00:22:51.127 CC lib/iscsi/md5.o 00:22:51.127 CC lib/iscsi/param.o 00:22:51.127 CC lib/iscsi/portal_grp.o 00:22:51.385 CC lib/iscsi/tgt_node.o 00:22:51.385 CC lib/iscsi/iscsi_subsystem.o 00:22:51.385 CC lib/iscsi/iscsi_rpc.o 00:22:51.385 CC lib/iscsi/task.o 00:22:51.951 LIB libspdk_nvmf.a 00:22:51.951 LIB libspdk_iscsi.a 00:22:52.209 CC module/env_dpdk/env_dpdk_rpc.o 00:22:52.209 CC module/blob/bdev/blob_bdev.o 00:22:52.209 CC module/keyring/file/keyring.o 00:22:52.209 CC module/keyring/file/keyring_rpc.o 00:22:52.209 CC module/sock/posix/posix.o 00:22:52.209 CC module/scheduler/dynamic/scheduler_dynamic.o 00:22:52.209 CC module/accel/error/accel_error.o 00:22:52.209 CC module/accel/ioat/accel_ioat.o 00:22:52.209 CC module/accel/dsa/accel_dsa.o 00:22:52.209 CC module/accel/iaa/accel_iaa.o 00:22:52.209 LIB libspdk_env_dpdk_rpc.a 00:22:52.209 CC module/accel/ioat/accel_ioat_rpc.o 00:22:52.209 CC module/accel/iaa/accel_iaa_rpc.o 00:22:52.209 LIB libspdk_keyring_file.a 00:22:52.467 CC module/accel/error/accel_error_rpc.o 00:22:52.467 CC module/accel/dsa/accel_dsa_rpc.o 00:22:52.467 LIB libspdk_accel_ioat.a 00:22:52.467 LIB libspdk_scheduler_dynamic.a 00:22:52.467 LIB libspdk_blob_bdev.a 00:22:52.467 LIB libspdk_accel_iaa.a 00:22:52.467 LIB libspdk_accel_error.a 00:22:52.467 LIB libspdk_accel_dsa.a 00:22:52.467 CC module/bdev/gpt/gpt.o 00:22:52.467 CC module/bdev/delay/vbdev_delay.o 00:22:52.467 CC module/bdev/lvol/vbdev_lvol.o 00:22:52.467 CC module/bdev/error/vbdev_error.o 00:22:52.467 CC module/bdev/malloc/bdev_malloc.o 00:22:52.467 CC module/bdev/passthru/vbdev_passthru.o 00:22:52.467 CC module/bdev/nvme/bdev_nvme.o 00:22:52.467 CC module/blobfs/bdev/blobfs_bdev.o 00:22:52.467 CC module/bdev/null/bdev_null.o 00:22:52.725 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:22:52.725 CC module/bdev/gpt/vbdev_gpt.o 00:22:52.725 CC module/bdev/null/bdev_null_rpc.o 00:22:52.725 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:22:52.725 LIB libspdk_sock_posix.a 00:22:52.725 CC module/bdev/error/vbdev_error_rpc.o 00:22:52.725 CC module/bdev/delay/vbdev_delay_rpc.o 00:22:52.725 CC module/bdev/malloc/bdev_malloc_rpc.o 00:22:52.725 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:22:52.725 LIB libspdk_blobfs_bdev.a 00:22:52.725 CC module/bdev/nvme/bdev_nvme_rpc.o 00:22:52.725 LIB libspdk_bdev_null.a 00:22:52.725 LIB libspdk_bdev_error.a 00:22:52.725 LIB libspdk_bdev_passthru.a 00:22:52.725 CC module/bdev/nvme/nvme_rpc.o 00:22:52.725 CC module/bdev/nvme/bdev_mdns_client.o 00:22:52.725 LIB libspdk_bdev_delay.a 00:22:52.725 LIB libspdk_bdev_gpt.a 00:22:52.725 LIB libspdk_bdev_malloc.a 00:22:52.725 CC module/bdev/raid/bdev_raid.o 00:22:52.725 CC module/bdev/raid/bdev_raid_rpc.o 00:22:52.725 CC module/bdev/split/vbdev_split.o 00:22:52.983 CC module/bdev/zone_block/vbdev_zone_block.o 00:22:52.983 CC 
module/bdev/aio/bdev_aio.o 00:22:52.983 CC module/bdev/split/vbdev_split_rpc.o 00:22:52.983 CC module/bdev/raid/bdev_raid_sb.o 00:22:52.983 LIB libspdk_bdev_lvol.a 00:22:52.983 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:22:52.983 CC module/bdev/aio/bdev_aio_rpc.o 00:22:52.983 CC module/bdev/raid/raid0.o 00:22:52.983 LIB libspdk_bdev_split.a 00:22:52.983 CC module/bdev/raid/raid1.o 00:22:52.983 CC module/bdev/raid/concat.o 00:22:52.983 LIB libspdk_bdev_zone_block.a 00:22:52.983 LIB libspdk_bdev_aio.a 00:22:53.241 LIB libspdk_bdev_raid.a 00:22:53.498 LIB libspdk_bdev_nvme.a 00:22:53.756 CC module/event/subsystems/sock/sock.o 00:22:53.756 CC module/event/subsystems/scheduler/scheduler.o 00:22:53.756 CC module/event/subsystems/vmd/vmd.o 00:22:53.756 CC module/event/subsystems/vmd/vmd_rpc.o 00:22:53.756 CC module/event/subsystems/keyring/keyring.o 00:22:53.756 CC module/event/subsystems/iobuf/iobuf.o 00:22:53.756 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:22:54.014 LIB libspdk_event_keyring.a 00:22:54.014 LIB libspdk_event_sock.a 00:22:54.014 LIB libspdk_event_scheduler.a 00:22:54.014 LIB libspdk_event_vmd.a 00:22:54.014 LIB libspdk_event_iobuf.a 00:22:54.014 CC module/event/subsystems/accel/accel.o 00:22:54.272 LIB libspdk_event_accel.a 00:22:54.272 CC module/event/subsystems/bdev/bdev.o 00:22:54.530 LIB libspdk_event_bdev.a 00:22:54.530 CC module/event/subsystems/scsi/scsi.o 00:22:54.530 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:22:54.530 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:22:54.530 LIB libspdk_event_scsi.a 00:22:54.787 LIB libspdk_event_nvmf.a 00:22:54.787 CC module/event/subsystems/iscsi/iscsi.o 00:22:54.787 LIB libspdk_event_iscsi.a 00:22:55.046 CXX app/trace/trace.o 00:22:55.046 CC app/trace_record/trace_record.o 00:22:55.046 CC app/spdk_lspci/spdk_lspci.o 00:22:55.046 CC app/spdk_nvme_perf/perf.o 00:22:55.046 CC app/spdk_nvme_identify/identify.o 00:22:55.046 CC app/nvmf_tgt/nvmf_main.o 00:22:55.046 CC app/spdk_tgt/spdk_tgt.o 00:22:55.046 CC app/iscsi_tgt/iscsi_tgt.o 00:22:55.046 CC test/thread/poller_perf/poller_perf.o 00:22:55.046 LINK spdk_lspci 00:22:55.046 CC examples/util/zipf/zipf.o 00:22:55.046 LINK nvmf_tgt 00:22:55.046 LINK poller_perf 00:22:55.046 LINK spdk_tgt 00:22:55.046 LINK iscsi_tgt 00:22:55.046 LINK zipf 00:22:55.304 LINK spdk_trace_record 00:22:55.561 LINK spdk_nvme_perf 00:22:55.561 CC test/thread/lock/spdk_lock.o 00:22:55.561 CC examples/ioat/perf/perf.o 00:22:55.561 LINK spdk_nvme_identify 00:22:55.561 LINK ioat_perf 00:22:55.819 LINK spdk_trace 00:22:56.077 LINK spdk_lock 00:22:56.335 CC examples/ioat/verify/verify.o 00:22:56.335 CC examples/vmd/lsvmd/lsvmd.o 00:22:56.335 LINK lsvmd 00:22:56.335 LINK verify 00:22:57.266 CC test/dma/test_dma/test_dma.o 00:22:57.266 LINK test_dma 00:22:58.200 CC test/app/bdev_svc/bdev_svc.o 00:22:58.200 LINK bdev_svc 00:22:58.766 CC examples/vmd/led/led.o 00:22:59.024 LINK led 00:22:59.957 CC examples/idxd/perf/perf.o 00:22:59.957 LINK idxd_perf 00:23:00.910 CC examples/thread/thread/thread_ex.o 00:23:00.910 LINK thread 00:23:00.910 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:23:01.177 LINK nvme_fuzz 00:23:01.435 TEST_HEADER include/spdk/config.h 00:23:01.435 CXX test/cpp_headers/accel.o 00:23:01.435 CC examples/sock/hello_world/hello_sock.o 00:23:01.693 CXX test/cpp_headers/accel_module.o 00:23:01.693 LINK hello_sock 00:23:01.693 CXX test/cpp_headers/assert.o 00:23:01.952 CXX test/cpp_headers/barrier.o 00:23:01.952 CXX test/cpp_headers/base64.o 00:23:02.215 CXX test/cpp_headers/bdev.o 00:23:02.488 CXX 
test/cpp_headers/bdev_module.o 00:23:02.488 CXX test/cpp_headers/bdev_zone.o 00:23:02.746 CXX test/cpp_headers/bit_array.o 00:23:03.004 CXX test/cpp_headers/bit_pool.o 00:23:03.004 CXX test/cpp_headers/blob.o 00:23:03.262 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:23:03.262 CXX test/cpp_headers/blob_bdev.o 00:23:03.262 CXX test/cpp_headers/blobfs.o 00:23:03.520 CXX test/cpp_headers/blobfs_bdev.o 00:23:03.779 CXX test/cpp_headers/conf.o 00:23:03.779 CXX test/cpp_headers/config.o 00:23:03.779 CXX test/cpp_headers/cpuset.o 00:23:04.037 CXX test/cpp_headers/crc16.o 00:23:04.037 CXX test/cpp_headers/crc32.o 00:23:04.294 LINK iscsi_fuzz 00:23:04.294 CXX test/cpp_headers/crc64.o 00:23:04.552 CXX test/cpp_headers/dif.o 00:23:04.552 CXX test/cpp_headers/dma.o 00:23:04.809 CXX test/cpp_headers/endian.o 00:23:04.809 CXX test/cpp_headers/env.o 00:23:05.066 CXX test/cpp_headers/env_dpdk.o 00:23:05.066 CC test/app/histogram_perf/histogram_perf.o 00:23:05.066 LINK histogram_perf 00:23:05.066 CXX test/cpp_headers/event.o 00:23:05.324 CXX test/cpp_headers/fd.o 00:23:05.582 CXX test/cpp_headers/fd_group.o 00:23:05.582 CXX test/cpp_headers/file.o 00:23:05.841 CXX test/cpp_headers/ftl.o 00:23:05.841 CC app/spdk_nvme_discover/discovery_aer.o 00:23:05.841 CXX test/cpp_headers/gpt_spec.o 00:23:06.100 LINK spdk_nvme_discover 00:23:06.100 CXX test/cpp_headers/hexlify.o 00:23:06.358 CXX test/cpp_headers/histogram_data.o 00:23:06.358 CXX test/cpp_headers/idxd.o 00:23:06.616 CXX test/cpp_headers/idxd_spec.o 00:23:06.616 CXX test/cpp_headers/init.o 00:23:06.873 CXX test/cpp_headers/ioat.o 00:23:07.131 CXX test/cpp_headers/ioat_spec.o 00:23:07.131 CXX test/cpp_headers/iscsi_spec.o 00:23:07.389 CXX test/cpp_headers/json.o 00:23:07.389 CXX test/cpp_headers/jsonrpc.o 00:23:07.647 CXX test/cpp_headers/keyring.o 00:23:07.906 CXX test/cpp_headers/keyring_module.o 00:23:07.906 CXX test/cpp_headers/likely.o 00:23:08.164 CXX test/cpp_headers/log.o 00:23:08.164 CC test/env/mem_callbacks/mem_callbacks.o 00:23:08.164 CXX test/cpp_headers/lvol.o 00:23:08.423 CXX test/cpp_headers/memory.o 00:23:08.681 CXX test/cpp_headers/mmio.o 00:23:08.681 LINK mem_callbacks 00:23:08.681 CXX test/cpp_headers/nbd.o 00:23:08.681 CXX test/cpp_headers/net.o 00:23:08.681 CC test/env/vtophys/vtophys.o 00:23:08.939 LINK vtophys 00:23:08.939 CXX test/cpp_headers/notify.o 00:23:08.939 CXX test/cpp_headers/nvme.o 00:23:09.197 CXX test/cpp_headers/nvme_intel.o 00:23:09.455 CXX test/cpp_headers/nvme_ocssd.o 00:23:09.455 CXX test/cpp_headers/nvme_ocssd_spec.o 00:23:09.713 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:23:09.713 CXX test/cpp_headers/nvme_spec.o 00:23:09.713 LINK env_dpdk_post_init 00:23:09.971 CXX test/cpp_headers/nvme_zns.o 00:23:09.971 CC app/spdk_top/spdk_top.o 00:23:09.971 CXX test/cpp_headers/nvmf.o 00:23:10.228 CXX test/cpp_headers/nvmf_cmd.o 00:23:10.486 CXX test/cpp_headers/nvmf_fc_spec.o 00:23:10.486 LINK spdk_top 00:23:10.486 CXX test/cpp_headers/nvmf_spec.o 00:23:10.743 CXX test/cpp_headers/nvmf_transport.o 00:23:10.999 CXX test/cpp_headers/opal.o 00:23:10.999 CC test/env/memory/memory_ut.o 00:23:11.256 CXX test/cpp_headers/opal_spec.o 00:23:11.256 CXX test/cpp_headers/pci_ids.o 00:23:11.514 CXX test/cpp_headers/pipe.o 00:23:11.514 CXX test/cpp_headers/queue.o 00:23:11.514 CXX test/cpp_headers/reduce.o 00:23:11.772 CXX test/cpp_headers/rpc.o 00:23:11.772 CC test/app/jsoncat/jsoncat.o 00:23:11.772 LINK jsoncat 00:23:11.772 LINK memory_ut 00:23:12.030 CXX test/cpp_headers/scheduler.o 00:23:12.030 CXX test/cpp_headers/scsi.o 
00:23:12.289 CXX test/cpp_headers/scsi_spec.o 00:23:12.289 CC app/fio/nvme/fio_plugin.o 00:23:12.547 CXX test/cpp_headers/sock.o 00:23:12.547 fio_plugin.c:1584:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:23:12.547 struct spdk_nvme_fdp_ruhs ruhs; 00:23:12.547 ^ 00:23:12.547 CC examples/nvme/hello_world/hello_world.o 00:23:12.547 CXX test/cpp_headers/stdinc.o 00:23:12.547 LINK hello_world 00:23:12.547 1 warning generated. 00:23:12.547 LINK spdk_nvme 00:23:12.805 CXX test/cpp_headers/string.o 00:23:12.805 CXX test/cpp_headers/thread.o 00:23:13.063 CXX test/cpp_headers/trace.o 00:23:13.321 CXX test/cpp_headers/trace_parser.o 00:23:13.321 CXX test/cpp_headers/tree.o 00:23:13.321 CXX test/cpp_headers/ublk.o 00:23:13.579 CXX test/cpp_headers/util.o 00:23:13.579 CXX test/cpp_headers/uuid.o 00:23:13.838 CXX test/cpp_headers/version.o 00:23:13.838 CXX test/cpp_headers/vfio_user_pci.o 00:23:14.096 CXX test/cpp_headers/vfio_user_spec.o 00:23:14.096 CXX test/cpp_headers/vhost.o 00:23:14.354 CC test/env/pci/pci_ut.o 00:23:14.354 CXX test/cpp_headers/vmd.o 00:23:14.354 CXX test/cpp_headers/xor.o 00:23:14.354 LINK pci_ut 00:23:14.611 CXX test/cpp_headers/zipf.o 00:23:14.611 CC app/fio/bdev/fio_plugin.o 00:23:14.869 CC test/app/stub/stub.o 00:23:14.869 LINK stub 00:23:14.869 LINK spdk_bdev 00:23:18.155 CC examples/nvme/reconnect/reconnect.o 00:23:18.155 LINK reconnect 00:23:18.413 CC examples/nvme/nvme_manage/nvme_manage.o 00:23:18.671 LINK nvme_manage 00:23:18.928 CC examples/nvme/arbitration/arbitration.o 00:23:18.928 CC test/event/event_perf/event_perf.o 00:23:18.928 LINK event_perf 00:23:19.185 LINK arbitration 00:23:20.559 CC examples/nvme/hotplug/hotplug.o 00:23:20.559 LINK hotplug 00:23:20.817 CC test/nvme/aer/aer.o 00:23:21.076 LINK aer 00:23:21.333 CC test/event/reactor/reactor.o 00:23:21.590 LINK reactor 00:23:21.590 CC examples/nvme/cmb_copy/cmb_copy.o 00:23:21.847 LINK cmb_copy 00:23:22.782 CC test/nvme/reset/reset.o 00:23:23.041 LINK reset 00:23:23.300 CC examples/accel/perf/accel_perf.o 00:23:23.558 LINK accel_perf 00:23:23.817 CC examples/nvme/abort/abort.o 00:23:24.075 LINK abort 00:23:24.075 CC test/event/reactor_perf/reactor_perf.o 00:23:24.333 LINK reactor_perf 00:23:24.333 CC examples/blob/hello_world/hello_blob.o 00:23:24.591 CC test/nvme/sgl/sgl.o 00:23:24.591 LINK hello_blob 00:23:24.592 LINK sgl 00:23:25.525 CC examples/blob/cli/blobcli.o 00:23:25.525 LINK blobcli 00:23:26.092 CC test/rpc_client/rpc_client_test.o 00:23:26.092 LINK rpc_client_test 00:23:26.350 CC test/nvme/e2edp/nvme_dp.o 00:23:26.608 LINK nvme_dp 00:23:26.866 CC test/nvme/overhead/overhead.o 00:23:26.866 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:23:26.866 LINK pmr_persistence 00:23:27.124 LINK overhead 00:23:28.498 CC test/accel/dif/dif.o 00:23:28.757 LINK dif 00:23:28.757 CC test/nvme/err_injection/err_injection.o 00:23:29.014 LINK err_injection 00:23:29.580 CC examples/bdev/hello_world/hello_bdev.o 00:23:29.580 CC test/blobfs/mkfs/mkfs.o 00:23:29.580 LINK mkfs 00:23:29.580 LINK hello_bdev 00:23:31.478 gmake[2]: Nothing to be done for 'all'. 
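The -Wgnu-variable-sized-type-not-at-end warning emitted above while compiling fio_plugin.c is benign here: the build reports "1 warning generated." and continues. It fires when a type whose last member is a flexible array member (which makes the whole type variable sized, as struct spdk_nvme_fdp_ruhs is) is placed somewhere other than the very end of an enclosing object; clang accepts that layout only as a GNU extension. A minimal sketch of the pattern, using hypothetical names rather than the real SPDK definitions:

    /* Hypothetical stand-ins; the actual spdk_nvme_fdp_ruhs layout and the
     * fio_plugin code differ. This only reproduces the warning class
     * (-Wgnu-variable-sized-type-not-at-end) seen in the log above. */
    #include <stdint.h>

    struct ruhs_like {
        uint16_t count;
        uint16_t desc[];       /* flexible array member: type becomes variable sized */
    };

    struct enclosing {
        struct ruhs_like hdr;  /* variable sized field not at the end ... */
        int later_field;       /* ... because another field follows it -> GNU extension */
    };

Moving the variable-sized member to the end of the enclosing struct, or sizing the trailing array explicitly, would silence the diagnostic; since the build treats it as a warning only, the run proceeds unaffected.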
00:23:31.478 CC examples/bdev/bdevperf/bdevperf.o 00:23:32.044 LINK bdevperf 00:23:32.044 CC test/nvme/startup/startup.o 00:23:32.044 CC test/nvme/reserve/reserve.o 00:23:32.044 LINK startup 00:23:32.044 LINK reserve 00:23:33.945 CC test/nvme/simple_copy/simple_copy.o 00:23:34.202 LINK simple_copy 00:23:36.732 CC test/nvme/connect_stress/connect_stress.o 00:23:36.732 LINK connect_stress 00:23:36.990 CC test/nvme/boot_partition/boot_partition.o 00:23:37.248 LINK boot_partition 00:23:39.160 CC test/nvme/compliance/nvme_compliance.o 00:23:39.160 LINK nvme_compliance 00:23:40.094 CC test/nvme/fused_ordering/fused_ordering.o 00:23:40.094 LINK fused_ordering 00:23:41.478 CC test/nvme/doorbell_aers/doorbell_aers.o 00:23:41.478 LINK doorbell_aers 00:23:42.062 CC test/nvme/fdp/fdp.o 00:23:42.322 LINK fdp 00:23:50.517 CC test/bdev/bdevio/bdevio.o 00:23:50.517 LINK bdevio 00:23:53.799 CC examples/nvmf/nvmf/nvmf.o 00:23:53.799 LINK nvmf 00:24:20.340 06:37:30 -- spdk/autopackage.sh@44 -- $ gmake -j10 clean 00:24:20.340 gmake[1]: Nothing to be done for 'clean'. 00:24:20.340 ps: stdin: not a terminal 00:24:21.277 gmake[2]: Nothing to be done for 'clean'. 00:24:21.536 06:37:34 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:24:21.536 06:37:34 -- common/autotest_common.sh@728 -- $ xtrace_disable 00:24:21.536 06:37:34 -- common/autotest_common.sh@10 -- $ set +x 00:24:21.536 06:37:34 -- spdk/autopackage.sh@48 -- $ timing_finish 00:24:21.536 06:37:34 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:21.536 06:37:34 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:24:21.536 06:37:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:24:21.536 06:37:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:24:21.536 06:37:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:24:21.536 06:37:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:21.536 06:37:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:24:21.536 06:37:34 -- pm/common@44 -- $ pid=69645 00:24:21.536 06:37:34 -- pm/common@50 -- $ kill -TERM 69645 00:24:21.536 + [[ -n 1233 ]] 00:24:21.536 + sudo kill 1233 00:24:22.108 [Pipeline] } 00:24:22.125 [Pipeline] // timeout 00:24:22.129 [Pipeline] } 00:24:22.140 [Pipeline] // stage 00:24:22.145 [Pipeline] } 00:24:22.160 [Pipeline] // catchError 00:24:22.168 [Pipeline] stage 00:24:22.170 [Pipeline] { (Stop VM) 00:24:22.184 [Pipeline] sh 00:24:22.492 + vagrant halt 00:24:26.678 ==> default: Halting domain... 00:24:48.618 [Pipeline] sh 00:24:48.897 + vagrant destroy -f 00:24:53.082 ==> default: Removing domain... 
00:24:53.095 [Pipeline] sh 00:24:53.398 + mv output /var/jenkins/workspace/freebsd-vg-autotest_2/output 00:24:53.408 [Pipeline] } 00:24:53.428 [Pipeline] // stage 00:24:53.435 [Pipeline] } 00:24:53.452 [Pipeline] // dir 00:24:53.457 [Pipeline] } 00:24:53.472 [Pipeline] // wrap 00:24:53.478 [Pipeline] } 00:24:53.492 [Pipeline] // catchError 00:24:53.501 [Pipeline] stage 00:24:53.503 [Pipeline] { (Epilogue) 00:24:53.515 [Pipeline] sh 00:24:53.792 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:53.805 [Pipeline] catchError 00:24:53.807 [Pipeline] { 00:24:53.831 [Pipeline] sh 00:24:54.113 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:54.113 Artifacts sizes are good 00:24:54.121 [Pipeline] } 00:24:54.137 [Pipeline] // catchError 00:24:54.149 [Pipeline] archiveArtifacts 00:24:54.156 Archiving artifacts 00:24:54.196 [Pipeline] cleanWs 00:24:54.207 [WS-CLEANUP] Deleting project workspace... 00:24:54.207 [WS-CLEANUP] Deferred wipeout is used... 00:24:54.214 [WS-CLEANUP] done 00:24:54.216 [Pipeline] } 00:24:54.234 [Pipeline] // stage 00:24:54.239 [Pipeline] } 00:24:54.255 [Pipeline] // node 00:24:54.261 [Pipeline] End of Pipeline 00:24:54.307 Finished: SUCCESS